# Temporal Platform Documentation

> Build invincible applications

Source: https://docs.temporal.io/llms-full.txt

This file contains all documentation content in a single document following the llmstxt.org standard.

## Security Controls for Temporal Cloud

Temporal Cloud provides the capabilities of Self-Hosted Temporal as a managed service; it does not manage your applications or Workers. Applications and services written using Temporal SDKs still run in your compute environment, and you have full control over how you secure your applications and services.

These best practices ensure your Temporal Cloud environment adheres to the security guidelines recommended by our team. You can also learn more about our security practices and compliance posture, and subscribe for vulnerability (CVE) updates, at https://trust.temporal.io/. If you have any concerns or questions, please reach out to your Account Executive or to our security team at security@temporal.io.

:::tip
**Stay Updated on Temporal Security Advisories:** Subscribe to Temporal's security updates on the [Temporal Trust Portal](https://trust.temporal.io/) so you are aware of any patches or CVEs. While Temporal Cloud server-side updates are handled by the vendor, your Temporal SDKs (in application code) should be kept up to date.
:::

## Identity and Access Management

Strong identity management in Temporal Cloud is crucial for ensuring secure access to your Temporal account. It's critical that only authorized users and services can access your Temporal Cloud account and that each has the minimum permissions needed for their role.

### Best Practices:

#### 1. Enable [SAML Single Sign-On](https://docs.temporal.io/cloud/saml) (SSO) for User Access

Integrate Temporal Cloud with your organization's identity provider via SAML 2.0 for centralized authentication. SSO allows you to enforce your corporate login policies (MFA, password complexity, etc.). When you configure SAML with Temporal Cloud, you can disable social logins (e.g. Microsoft, Google) by opening a support ticket.

#### 2. Use Least-Privilege Roles for Temporal Cloud Users

Temporal Cloud provides [preconfigured account-level roles](https://docs.temporal.io/cloud/users) (Account Owner, Finance Admin, Global Admin, Developer, Read-Only) and Namespace-level permissions. Assign users the lowest level of access they need. For example, give developers access only to the Namespaces they work on, and use read-only roles for auditors or reviewers. Regularly review user roles and remove or downgrade accounts that are no longer needed.

#### 3. Leverage SCIM or Automated User Provisioning

When applicable, use [SCIM](https://docs.temporal.io/cloud/scim) or the Temporal Cloud user management API to automate adding and removing user accounts. This ensures timely removal of access when people change roles or leave the organization.

#### 4. Use Service Accounts for Automation

For non-human access (CI/CD pipelines, backend services), use [Temporal Cloud Service Accounts](https://docs.temporal.io/cloud/service-accounts) instead of shared user logins. Service Accounts are machine identities that can be granted specific permissions without being tied to an individual. Create separate Service Accounts with unique API keys for different applications or microservices, and apply least privilege to each (e.g. a service account that only has access to one Namespace).
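As an illustration of the Service Account pattern, here is a minimal Go SDK sketch that connects to Temporal Cloud with an API key held by a single service. The endpoint, Namespace, and environment variable name are placeholders for your own values.

```go
package main

import (
	"crypto/tls"
	"log"
	"os"

	"go.temporal.io/sdk/client"
)

func main() {
	// The key is injected from your secrets manager, never hardcoded.
	apiKey := os.Getenv("TEMPORAL_API_KEY")

	c, err := client.Dial(client.Options{
		HostPort:  "<region>.<cloud_provider>.api.temporal.io:7233",
		Namespace: "<namespace>.<account-id>",
		ConnectionOptions: client.ConnectionOptions{
			TLS: &tls.Config{}, // API-key auth still rides on TLS
		},
		Credentials: client.NewAPIKeyStaticCredentials(apiKey),
	})
	if err != nil {
		log.Fatalf("unable to create Temporal client: %v", err)
	}
	defer c.Close()
}
```

Because each service holds its own key, revoking one service's access never disturbs the others.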
## Secure Application Authentication and API Access

Clients interact with the Temporal Service to initiate and manage Workflows, while Workers execute the business logic defined in Workflows and Activities in your own environment. A crucial aspect of strengthening your security posture is securing these interactions. Temporal Cloud offers two authentication methods for your applications: mutual TLS certificates and API keys.

### Best Practices:

#### 1. Use Mutual TLS (mTLS) for comprehensive security

Temporal Cloud secures its gRPC endpoint per Namespace via mutual TLS. This means you provide a Certificate Authority (CA) certificate for your Namespace, and all your Temporal Clients and Workers must present client certificates signed by that CA. We recommend enabling mTLS for strong identity assurance of clients; it ensures that only systems holding a valid certificate (issued by your trusted CA) can connect. Generate a private key and CA certificate (or use your enterprise CA) and upload the CA to Temporal Cloud. Do not share these certificates and associated keys beyond the authorized services. A minimal Go connection sketch appears below, after the networking best practices.

#### 2. Proactively manage and rotate certificates

Track the expiration dates of your client and [Certificate Authority certificates](https://docs.temporal.io/cloud/certificates). Temporal Cloud trusts the uploaded CA; if it expires, all client authentication will fail. Establish and automate a certificate rotation schedule (e.g. rotate client certificates quarterly and CA certificates annually, well before expiry). Temporal supports uploading a new CA certificate alongside the old one to allow seamless rollover. Always test new certificates in a staging environment if possible.

#### 3. If you're using API Keys, handle them with strict care

Temporal Cloud API keys are an alternative to mTLS for authenticating SDKs, the CLI, and automation. If you opt for API keys, handle them with strict care by enacting the following practices:

- Keep them secret: store them in a secrets manager, never in code or Git.
- Rotate at least every 90 days: Temporal lets you create a new key, swap it in, then delete the old one.
- One key per service or person: no sharing or reuse.
- Monitor usage and revoke on anomalies: feed Temporal audit logs to your SIEM.
- Optional: Admins can disable all user API keys if your policy is "mTLS only."

## Network Configuration and Isolation

Although Temporal Cloud is a SaaS offering, you retain control over its networking configuration, allowing for tailored security measures. By minimizing public internet exposure and segmenting Temporal Workflows into suitable network zones, you can significantly bolster security and reduce potential vulnerabilities. This approach ensures that your Workflows are isolated and protected within your defined network boundaries, even while leveraging the benefits of a cloud-based service.

### Best Practices:

#### 1. Use Private Connectivity

Temporal Cloud supports private connectivity options such as [AWS PrivateLink](https://docs.temporal.io/cloud/connectivity/aws-connectivity) and [Google Cloud Private Service Connect](https://docs.temporal.io/cloud/connectivity/gcp-connectivity). If your infrastructure is in AWS or GCP, configure a PrivateLink/PSC endpoint for Temporal Cloud. This allows your Workers and applications to reach Temporal Cloud over a private network path, avoiding traversal of the public internet. Private connectivity reduces the surface for man-in-the-middle attacks and can meet stringent network security policies.
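Whether traffic flows over PrivateLink/PSC or the public internet, Workers still authenticate with the Namespace's client certificates described in the previous section. Here is a minimal Go sketch of an mTLS connection; the certificate paths and endpoint are placeholders.

```go
package main

import (
	"crypto/tls"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Client certificate and key issued by the CA uploaded to the
	// Namespace; paths are placeholders for your own secrets mount.
	cert, err := tls.LoadX509KeyPair("/secrets/client.pem", "/secrets/client.key")
	if err != nil {
		log.Fatalf("unable to load client certificate: %v", err)
	}

	c, err := client.Dial(client.Options{
		HostPort:  "<namespace>.<account-id>.tmprl.cloud:7233",
		Namespace: "<namespace>.<account-id>",
		ConnectionOptions: client.ConnectionOptions{
			TLS: &tls.Config{Certificates: []tls.Certificate{cert}},
		},
	})
	if err != nil {
		log.Fatalf("unable to create Temporal client: %v", err)
	}
	defer c.Close()
}
```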
#### 2. Separate environments by Namespace

Use [Temporal Namespaces](https://learn.temporal.io/best_practice_guides/managing_a_namespace/#2-use-domain-service-and-environment-to-name-namespaces) to isolate Workflows for different environments or teams (e.g. development, staging, production). Each Namespace is logically segregated and cannot interact with others by default, providing a security boundary. Ensure that your production Namespace uses stricter network controls (e.g. only accessible from the prod network) and that its credentials are separate from those of non-prod Namespaces. This limits the impact of any compromise in a lower environment. And because Workflow data is only visible to users with access to that Namespace, separating environments by Namespace also enforces data-visibility boundaries.

## Data Protection and Encryption

Temporal's data encryption capabilities ensure the security and confidentiality of your Workflows and provide protection without compromising performance. Protecting the data that you send to and store in Temporal Cloud is a joint responsibility. Temporal Cloud already encrypts all data at rest on the server side, but you can add additional layers of encryption and control.

### Best Practices:

#### 1. Enable Client-Side Encryption for Workflow Data

Temporal provides an optional [data conversion framework](https://docs.temporal.io/dataconversion) (Data Converter) and payload codec interface; customers must implement, deploy, and operate their own custom codec and manage encryption keys. In practice, this means you can encrypt any sensitive data before it is sent to Temporal Cloud and decrypt it only on the Client/Worker side. Because encryption keys stay under your control, you are responsible for key generation, secure storage, rotation, and versioning. Implementing this involves developing a custom codec plugin in your Temporal SDK and, optionally (if you need to inspect decrypted payloads in the Web UI or CLI), deploying a dedicated codec server.

#### 2. Encode Workflow Failure Details with a [Failure Converter](https://docs.temporal.io/failure-converter)

Temporal's default behavior copies error messages and call stacks as plain text, and this text is directly accessible in the Message field of Workflow Executions. If your failure messages and stack traces contain sensitive information, configure the [Failure Converter](https://docs.temporal.io/failure-converter) to encrypt the error information. This encrypts the `message` and `stack_trace` fields in the payloads.

#### 3. Leverage Namespace Data Retention Policies

Each Temporal Cloud Namespace has a [Retention Period](https://docs.temporal.io/temporal-service/temporal-server#retention-period) setting for Workflow histories (1 to 90 days). Set an appropriate retention period to balance operational needs with security. Shorter retention means completed Workflow data (history, payloads) is purged sooner, reducing the amount of sensitive data stored in the cloud at any time. Document your retention choices to align with your company's data retention policies and regulatory requirements. If you need to retain Workflow data for longer than 90 days, you can export histories to your own S3 or GCS buckets.

### Availability and Disaster Recovery

Temporal Cloud's platform is engineered for fault tolerance out of the box, but you determine which Namespaces merit the very highest availability guarantees. Use the table below to decide when to turn on different High Availability models and how to operationalize them.
| Namespace scope | Use Case | Uptime SLA | Recovery Time Objective (RTO) | Recovery Point Objective (RPO) |
|-----------------|----------|------------|-------------------------------|--------------------------------|
| **Single-Region** | If your application is built for one region and does not have stringent high-availability or disaster recovery requirements. | 99.9% | ≤ 8 hours | ≤ 8 hours |
| **Same-Region Replication** | If you want higher availability, but your application is designed for a single region or cross-region latency doesn't meet your application's SLAs. | 99.99% | ≤ 20 minutes | Near-zero (≈ seconds) |
| **Multi-Region Replication** | If a disruption of your Workflow would cause loss of revenue, poor end-user experience, or issues with regulatory compliance. | 99.99% | ≤ 20 minutes | Near-zero (≈ seconds) |
| **Multi-Cloud Replication** | If you need the highest level of disaster tolerance, protecting against outages of an entire cloud provider (e.g., AWS or GCP). | 99.99% | ≤ 20 minutes | Near-zero (≈ seconds) |

### Best Practices:

#### 1. Identify Availability-sensitive Namespaces

Run a business-impact analysis to flag Workflows where a regional outage would have outsized business impact, such as revenue loss, poor customer experience, safety issues, or inability to meet legal obligations, and mark the Namespaces they run in as availability-sensitive.

#### 2. Enable High Availability for business critical use cases

For many organizations, ensuring High Availability (HA) is required because of strict uptime requirements, compliance, and regulatory needs. For these critical use cases, enable High Availability features for specific Namespaces to obtain a [99.99% contractual SLA](https://docs.temporal.io/cloud/high-availability#high-availability-features). When choosing between [same-region, multi-region, and multi-cloud replication](https://docs.temporal.io/cloud/high-availability), we recommend multi-region or multi-cloud replication to distribute your dependencies across regions. Using physically separated regions improves the fault tolerance of your application. By default, Temporal Cloud provides a [99.9% contractual SLA guarantee](https://docs.temporal.io/cloud/high-availability) against service errors for all Namespaces.

Note: [enabling HA features for a Namespace doubles its consumption cost](https://docs.temporal.io/cloud/pricing#high-availability-features).

---

## Security - Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability). Learn why you should use Nexus in the [evaluation guide](/evaluate/nexus).
:::

Temporal Cloud has built-in Nexus security. It provides secure Nexus connectivity across Namespaces with an mTLS-secured Envoy mesh. Workers authenticate to their Namespace with mTLS client certificates or API keys, as allowed by their Namespace. Encryption for Nexus payloads is also supported, for example using shared symmetric keys and compatible Data Converters.

## Registry roles and permissions

Nexus Endpoints are Account-scoped resources, similar to a Namespace.
The following roles and permissions are required to manage and view Nexus Endpoints in the Nexus Registry:

- Viewing and browsing the full list of Nexus Endpoints in an Account:
  - Read-only role (or higher)
- Managing a Nexus Endpoint (create, update, delete):
  - Developer role (or higher) and Namespace Admin permission on the Endpoint's target Namespace

## Runtime access controls

The Nexus Registry allows setting an Endpoint access policy on each Endpoint. This currently includes an allowlist of caller Namespaces that can use the Endpoint at runtime. Endpoint access control policies are enforced at runtime:

1. The caller's Worker authenticates with its Namespace, as it does today, with mTLS certificates or API keys. This establishes the caller's identity and caller Namespace.
2. A caller Workflow executes a Nexus Operation on a Nexus Endpoint.
3. The Endpoint access control policy is enforced, checking whether the caller Namespace is in the Endpoint allowlist.

See [Runtime Access Controls](/nexus/security#runtime-access-controls) and [Configuring Runtime Access Controls](/nexus/registry#configure-runtime-access-controls) for additional details.

## Secure connectivity

Nexus Endpoints are only privately accessible from within a Temporal Cloud account, and mTLS is used for all Nexus communication, including across cloud cells and regions. Workers authenticate to their Namespaces through mTLS or an API key, as allowed by their Namespace configuration.

See [Nexus Secure Connectivity](/nexus/security#secure-connectivity) for additional details.

## Payload encryption

For payload encryption, the DataConverter works the same for a Nexus Operation as it does for other payloads sent between a Worker and Temporal Cloud.

See [Nexus Payload Encryption & Data Converter](/nexus/security#payload-encryption-data-converter) for additional details.

---

## Security model - Temporal Cloud

**What kind of security does Temporal Cloud provide?**

The security model of [Temporal Cloud](/cloud) encompasses applications, data, and the Temporal Cloud platform itself.

:::info General platform security
For information about the general security features of Temporal, see our [Platform security page](/security).
:::

## Application and data {#applications-and-data}

**What is the security model for applications and data in Temporal Cloud?**

### Code execution boundaries

Temporal Cloud provides the capabilities of Temporal Server as a managed service; it does not manage your applications or [Workers](/workers#worker). Applications and services written using [Temporal SDKs](/encyclopedia/temporal-sdks) run in your computing environment, such as containers (Docker, Kubernetes) or virtual machines (in any hosting environment). You have full control over how you secure your applications and services.

### Data Converter: Client-side encryption

The optional [Data Conversion](/dataconversion) capability of the Temporal Platform lets you transparently encrypt data before it's sent to Temporal Cloud and decrypt it when it comes out. Data Conversion runs on your Temporal Workers and [Clients](/encyclopedia/temporal-sdks#temporal-client); Temporal Cloud cannot see or decrypt your data. If you use this feature, data stored in Temporal Cloud remains encrypted even if the service itself is compromised. By deploying a [Codec Server](/production-deployment/data-encryption) you can securely decrypt data in the [Temporal Web UI](/web-ui) without sharing encryption keys with Temporal.
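To make the client-side encryption model concrete, here is a minimal Go sketch of a Payload Codec that applies AES-256-GCM before payloads leave your environment. Key management is deliberately simplified; in production the key would come from your KMS, with rotation and a key ID recorded in the Payload metadata.

```go
package mycodec

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"

	commonpb "go.temporal.io/api/common/v1"
	"go.temporal.io/sdk/converter"
)

// EncryptionCodec encrypts every Payload before it is sent to Temporal
// Cloud, so the service only ever stores ciphertext.
type EncryptionCodec struct {
	Key []byte // 32 bytes for AES-256
}

func (e *EncryptionCodec) aead() (cipher.AEAD, error) {
	block, err := aes.NewCipher(e.Key)
	if err != nil {
		return nil, err
	}
	return cipher.NewGCM(block)
}

// Encode wraps each original Payload in an encrypted envelope.
func (e *EncryptionCodec) Encode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
	gcm, err := e.aead()
	if err != nil {
		return nil, err
	}
	result := make([]*commonpb.Payload, len(payloads))
	for i, p := range payloads {
		plain, err := p.Marshal()
		if err != nil {
			return nil, err
		}
		nonce := make([]byte, gcm.NonceSize())
		if _, err := rand.Read(nonce); err != nil {
			return nil, err
		}
		result[i] = &commonpb.Payload{
			Metadata: map[string][]byte{"encoding": []byte("binary/encrypted")},
			Data:     gcm.Seal(nonce, nonce, plain, nil), // nonce || ciphertext
		}
	}
	return result, nil
}

// Decode reverses Encode; Payloads this codec did not encrypt pass through.
func (e *EncryptionCodec) Decode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
	gcm, err := e.aead()
	if err != nil {
		return nil, err
	}
	result := make([]*commonpb.Payload, len(payloads))
	for i, p := range payloads {
		if string(p.Metadata["encoding"]) != "binary/encrypted" {
			result[i] = p
			continue
		}
		nonce, ciphertext := p.Data[:gcm.NonceSize()], p.Data[gcm.NonceSize():]
		plain, err := gcm.Open(nil, nonce, ciphertext, nil)
		if err != nil {
			return nil, err
		}
		result[i] = &commonpb.Payload{}
		if err := result[i].Unmarshal(plain); err != nil {
			return nil, err
		}
	}
	return result, nil
}

// NewDataConverter wires the codec into the SDK's default converter.
func NewDataConverter(key []byte) converter.DataConverter {
	return converter.NewCodecDataConverter(
		converter.GetDefaultDataConverter(),
		&EncryptionCodec{Key: key},
	)
}
```

Pass the resulting DataConverter in `client.Options` when dialing Temporal Cloud. The same codec can also back a Codec Server, so the Web UI can display decrypted payloads without Temporal ever holding your keys.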
## The platform {#the-platform}

**What is the security model for the Temporal Cloud platform?**

### Namespace isolation

The base unit of isolation in a Temporal environment is a [Namespace](/namespaces). Each Temporal Cloud account can have multiple Namespaces, and each Namespace is isolated to ensure your workloads remain secure and performant.

#### Authentication

Each Namespace is secured with your choice of authentication method:

- **mTLS certificates** - Namespace-specific X.509 certificates for mutual TLS authentication
- **API keys** - Namespace-scoped API keys for authentication

See [API Keys](/cloud/api-keys) and [mTLS Certificates](/cloud/certificates) for more details on configuring authentication for your Namespace.

#### Rate limiting

Temporal Cloud protects each Namespace with separate rate limits to prevent noisy-neighbor problems:

- **Actions Per Second (APS)** - Limits the rate of [actions](/best-practices/managing-aps-limits) performed in your Workflows
- **Operations Per Second (OPS)** - Limits the rate of all [operations](/references/operation-list) that create load on Temporal Server

These per-Namespace rate limits ensure that one Namespace experiencing a traffic spike cannot impact the performance or reliability of other Namespaces, whether those Namespaces belong to a single Temporal Cloud account or separate ones.

See [Rate limiting](/cloud/limits) for more information about Temporal Cloud limits, and [Monitoring trends against limits](/cloud/service-health#rps-aps-rate-limits) for monitoring best practices.

#### Inter-Namespace communication

Namespaces are isolated by default. The only way for Workflows in one Namespace to interact with Workflows in another Namespace is through [Temporal Nexus](/nexus), which provides controlled, secure cross-Namespace communication via Nexus Endpoints.

See [Nexus Security](/nexus/security) for details on how Nexus enables secure inter-Namespace communication.

#### Logical segregation

Temporal Cloud is a multi-tenant service. Namespaces in the same environment are logically segregated. Namespaces do not share data processing or data storage across regional boundaries.

### Private Connectivity

Temporal Cloud supports private connectivity to enable you to connect to Temporal Cloud from a secured network.

See the [Connectivity](/cloud/connectivity) page for more information and details about using AWS PrivateLink and GCP Private Service Connect with Temporal Cloud.

### Temporal Nexus

Like a Namespace, a Nexus Endpoint is an account-scoped resource that is global within a Temporal Cloud account. Any user with the Developer role (or higher) in an account who is also a Namespace Admin on the Endpoint's target Namespace can manage (create, update, delete) a Nexus Endpoint. All users with a Read-only role (or higher) in an account can view and browse the full list of Endpoints.

Runtime access from a Workflow in a caller Namespace to a Nexus Endpoint is controlled by an allowlist policy (of caller Namespaces) for each Endpoint in the Nexus API registry. Workers authenticate with Temporal Cloud as they do today, with mTLS certificates or API keys, as allowed by the Namespace configuration. Nexus requests are sent from the caller's Namespace to the handler's Namespace over a secure multi-region mTLS Envoy mesh.

For payload encryption, the DataConverter works the same for a Nexus Operation as it does for other payloads sent between a Worker and Temporal Cloud. See [Nexus Security](/nexus/security) for more information.
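From the caller's perspective, none of this security machinery changes the programming model. Here is a minimal Go sketch of a Workflow invoking a Nexus Operation; the endpoint, service, and operation names are placeholders, and the call succeeds only if the caller Namespace is on the Endpoint's allowlist.

```go
package app

import "go.temporal.io/sdk/workflow"

// HelloWorkflow calls a Nexus Operation exposed by an Endpoint whose
// handler runs in another Namespace. Transport security (the mTLS Envoy
// mesh) and the allowlist check are enforced by Temporal Cloud.
func HelloWorkflow(ctx workflow.Context, name string) (string, error) {
	nexusClient := workflow.NewNexusClient("my-endpoint", "my-hello-service")
	fut := nexusClient.ExecuteOperation(ctx, "say-hello", name,
		workflow.NexusOperationOptions{})
	var greeting string
	if err := fut.Get(ctx, &greeting); err != nil {
		return "", err
	}
	return greeting, nil
}
```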
### Encryption

:::tip TLS vs mTLS
**TLS** (Transport Layer Security) encrypts data in transit. **mTLS** (mutual TLS) is an authentication method where both client and server present certificates to verify identity.

All Temporal Cloud connections use TLS encryption. When you choose "mTLS authentication," you're choosing how to prove your identity, not whether your connection is encrypted.
:::

**In transit**: All connections to Temporal Cloud use TLS 1.3 encryption, regardless of your authentication method ([API keys](/cloud/api-keys) or [mTLS certificates](/cloud/certificates)).

**At rest**: Data is stored in two locations: an Elasticsearch instance (used when filtering Workflows in SDK clients, the [CLI](/cloud/tcld), or the Web UI) and the core Temporal Cloud persistence layer. Both are encrypted at rest with AES-256-GCM.

### Identity

Authentication to Temporal Cloud gRPC endpoints supports two methods:

- **[API keys](/cloud/api-keys)**: Identity-based authentication using bearer tokens. Recommended for most use cases.
- **[mTLS certificates](/cloud/certificates)**: Mutual TLS authentication using client certificates issued by your CA.

Both methods provide secure, encrypted connections to Temporal Cloud. Choose based on your organization's security requirements and key management preferences.

For user authentication to the Temporal Cloud UI, see [How to manage SAML authentication with Temporal Cloud](/cloud/saml).

### Access

Authorization is managed at the account and Namespace level. Users and systems are assigned one or more preconfigured roles. Users hold [account-level Roles](/cloud/users#account-level-roles) of administrators, developers, and read-only users. Systems and application processes hold their own distinct roles.

### Monitoring

In addition to extensive system monitoring for operational and availability requirements, we collect and monitor audit logs from the AWS environment and all calls to the gRPC API (which is used by the SDKs, CLI, and Web UI). These audit logs can be made available for ingestion into your security monitoring system.

### Testing

We contract with a third party to perform a full-scope pentest (with the exception of social engineering) annually. Additionally, we perform targeted third-party and internal testing on an as-needed basis, such as when a significant feature is being released.

### Internal Temporal access

We restrict access to production systems to the small team of employees who maintain our production infrastructure. We log all access to production systems; shared accounts are not allowed. Access to all production systems is through SSO, with MFA enabled. Access to our cloud environments is granted only for limited periods of time, with a maximum of 8 hours. (For more information, see the blog post [Rolling out access hours at Temporal](https://temporal.io/blog/rolling-out-access-hours-at-temporal).) All Temporal engineering systems are secured by GitHub credentials, which require both membership in the Temporal GitHub organization and MFA. Access grants are reviewed quarterly.

### Compliance

Temporal Technologies is SOC 2 Type 2 certified and compliant with GDPR and HIPAA regulations. Compliance audits are available by request through our [Contact](https://pages.temporal.io/contact-us) page.

---

## Temporal Platform security features

:::info General company security
For information about the general security habits of Temporal Technologies, see our [trust page](https://trust.temporal.io).
:::

:::info Cloud security
For information about Temporal Cloud security features, see our [Cloud security page](/cloud/security).
:::

The Temporal Platform is designed with security in mind, and there are many features that you can use to keep both the Platform itself and your user data secure. A secured Temporal Server has its network communication encrypted and has authentication and authorization protocols set up for API calls made to it. Without these, your server could be accessed by unwanted entities. This page documents the built-in, opt-in security measures that come with Temporal. However, users may also choose to design their own security architecture with reverse proxies or run unsecured instances inside a VPC environment.

### Server Samples

The https://github.com/temporalio/samples-server repo offers two examples, which are further explained below:

- **TLS:** how to configure Transport Layer Security (TLS) to secure network communication with and within a Temporal Service.
- **Authorizer:** how to inject a low-level authorizer component that can control access to all API calls.

### Encryption in transit with mTLS

Temporal supports Mutual Transport Layer Security (mTLS) as a way of encrypting network traffic between the services of a Temporal Service and also between application processes and a Temporal Service. Self-signed or properly minted certificates can be used for mTLS. mTLS is set in Temporal's [TLS configuration](/references/configuration#tls). The configuration includes two sections, so that intra-Temporal Service and external traffic can be encrypted with different sets of certificates and settings:

- `internode`: Configuration for encrypting communication between nodes in the Temporal Service.
- `frontend`: Configuration for encrypting the Frontend's public endpoints.

A customized configuration can be passed using either the [WithConfig](/references/server-options#withconfig) or [WithConfigLoader](/references/server-options#withconfigloader) Server options. See the [TLS configuration reference](/references/configuration#tls) for more details. A programmatic sketch using `WithConfig` appears at the end of this section.

### Authentication

There are a few authentication protocols available to prevent unwanted access, covering authentication of servers, clients, and users.

### Servers

To prevent spoofing and [MITM attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack), you can specify the `serverName` in the `client` section of your respective mTLS configuration. This enables established connections to authenticate the endpoint, ensuring that the server certificate presented to any connecting Client has the appropriate server name in its CN property. It can be used for both `internode` and `frontend` endpoints.

More guidance on mTLS setup can be found in [the `samples-server` repo](https://github.com/temporalio/samples-server/tree/main/tls), and you can reach out to us for further guidance.

### Client connections

To restrict a client's network access to Temporal Service endpoints, you can limit connections to clients with certificates issued by a specific Certificate Authority (CA). Use the `clientCAFiles`/`clientCAData` and `requireClientAuth` properties in both the `internode` and `frontend` sections of the [mTLS configuration](/references/configuration#tls).

### Users

To restrict access to specific users, authentication and authorization are performed through extensibility points and plugins, as described in the [Authorization](#authorization) section below.
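Here is a hedged sketch of passing the TLS settings above programmatically with `WithConfig`, assuming the struct layout of `go.temporal.io/server/common/config`; file paths and server names are placeholders for your own PKI material.

```go
package main

import (
	"log"

	"go.temporal.io/server/common/config"
	"go.temporal.io/server/temporal"
)

func main() {
	// Load the base configuration, then overlay mTLS settings
	// programmatically instead of (or in addition to) YAML.
	cfg, err := config.LoadConfig("production", "./config", "")
	if err != nil {
		log.Fatal(err)
	}
	cfg.Global.TLS = config.RootTLS{
		Internode: config.GroupTLS{
			Server: config.ServerTLS{
				CertFile:          "/certs/internode.pem",
				KeyFile:           "/certs/internode.key",
				ClientCAFiles:     []string{"/certs/ca.cert"},
				RequireClientAuth: true, // reject clients without a CA-issued cert
			},
			Client: config.ClientTLS{
				ServerName:  "internode.example.internal", // must match the cert's CN
				RootCAFiles: []string{"/certs/ca.cert"},
			},
		},
		// The frontend section takes the same shape, with its own
		// certificates and settings for external traffic.
	}

	server, err := temporal.NewServer(temporal.WithConfig(cfg))
	if err != nil {
		log.Fatal(err)
	}
	if err := server.Start(); err != nil {
		log.Fatal(err)
	}
}
```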
#### Authorization

:::note
Information regarding [`Authorizer`](#authorizer-plugin) and [`ClaimMapper`](#claim-mapper) has been moved to another location.
:::

Temporal offers two plugin interfaces for implementing API call authorization:

- [`ClaimMapper`](#claim-mapper)
- [`Authorizer`](#authorizer-plugin)

The authorization and claim mapping logic is customizable, making it adaptable to a variety of use cases and identity schemes. When these plugins are provided, the Frontend invokes them before executing the requested operation. See https://github.com/temporalio/samples-server/blob/main/extensibility/authorizer for a sample implementation.

### Single sign-on integration

Temporal can be integrated with a single sign-on (SSO) experience by using the `ClaimMapper` and `Authorizer` plugins. The default JWT `ClaimMapper` implementation can be used as is or as a base for a custom implementation of a similar plugin.

### Temporal UI

To enable SSO authentication in the Temporal UI using environment credentials, you need to configure the UI container with specific environment variables that define your identity provider and OAuth settings. In your docker-compose.yaml, set `TEMPORAL_AUTH_ENABLED=true` to activate authentication. Next, specify the required OAuth credentials and endpoints using environment variables such as:

- `TEMPORAL_AUTH_CLIENT_ID`
- `TEMPORAL_AUTH_CLIENT_SECRET`
- `TEMPORAL_AUTH_PROVIDER_URL`
- `TEMPORAL_AUTH_CALLBACK_URL`

These values correspond to the client credentials and endpoints provided by your OAuth identity provider (such as Google, Auth0, or Okta). When properly configured, the Temporal UI redirects users to your SSO login page and enforces authentication on access. This approach does not require any additional configuration files, making it ideal for containerized environments using secure environment variable injection.

```yaml
temporal-ui:
  container_name: temporal-ui
  depends_on:
    - temporal
  environment:
    - TEMPORAL_GRPC_ENDPOINT=temporal:7233
    - TEMPORAL_ADDRESS=temporal:7233
    - TEMPORAL_AUTH_ENABLED=true
    - TEMPORAL_AUTH_PROVIDER_URL=https://example.com
    - TEMPORAL_AUTH_CLIENT_ID=xxxxxxxxxxxxxx
    - TEMPORAL_AUTH_CLIENT_SECRET=xxxxxxxxxxxxxx
    - TEMPORAL_AUTH_CALLBACK_URL=https://your-domain/auth/sso/callback
    - TEMPORAL_AUTH_SCOPES=openid profile email
  image: temporalio/ui:latest
  networks:
    - temporal-network
  ports:
    - 8080:8080
```

For more general configuration guidance, refer to the [Temporal UI README](https://github.com/temporalio/ui?tab=readme-ov-file#configuration). For more details on configuration with Docker, refer to the [Temporal UI Config](https://github.com/temporalio/ui/blob/c95265ee6431fd0f6cf78ae06373885d66a8ee0c/server/docker/config-template.yaml).

## Temporal Service plugins {#plugins}

The Temporal Service supports some pluggable components.

### What is a ClaimMapper Plugin? {#claim-mapper}

The Claim Mapper component is a pluggable component that extracts Claims from JSON Web Tokens (JWTs). This process is achieved with the method `GetClaims`, which translates `AuthInfo` structs from the caller into `Claims` about the caller's roles within Temporal.

A `Role` (within Temporal) is a bit mask that combines one or more of the role constants. In the following example, the role is assigned constants that allow the caller to read and write information.

```go
role := authorization.RoleReader | authorization.RoleWriter
```

`GetClaims` is customizable and can be modified with the `temporal.WithClaimMapper` server option.
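As a reference point, a minimal custom `ClaimMapper` might look like the following Go sketch. The mapping rule, deriving Namespace access from the mTLS certificate's Common Name, is purely illustrative.

```go
package security

import (
	"strings"

	"go.temporal.io/server/common/authorization"
)

// certClaimMapper derives Temporal Claims from the subject of the
// caller's mTLS certificate. Illustrative rule: a certificate issued to
// "payments.worker.example.com" grants read/write on the "payments"
// Namespace.
type certClaimMapper struct{}

func (m *certClaimMapper) GetClaims(authInfo *authorization.AuthInfo) (*authorization.Claims, error) {
	claims := authorization.Claims{
		Namespaces: map[string]authorization.Role{},
	}
	if authInfo.TLSSubject != nil {
		claims.Subject = authInfo.TLSSubject.CommonName
		if ns, ok := strings.CutSuffix(claims.Subject, ".worker.example.com"); ok {
			claims.Namespaces[ns] = authorization.RoleReader | authorization.RoleWriter
		}
	}
	return &claims, nil
}
```

Register an implementation like this with `temporal.WithClaimMapper`, as shown in the configuration example later on this page.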
Temporal also offers a default JWT `ClaimMapper` for your use. A typical approach is for the `ClaimMapper` to interpret custom `Claims` from a caller's JWT, such as membership in groups, and map them to Temporal roles for the user. The subject information from the caller's mTLS certificate can also be a parameter in determining roles.

#### `AuthInfo`

`AuthInfo` is a struct that is passed to `GetClaims`. `AuthInfo` contains an authorization token extracted from the `authorization` header of the gRPC request. `AuthInfo` includes a pointer to the `pkix.Name` struct, which contains an [x.509](https://www.ibm.com/docs/en/ibm-mq/7.5?topic=certificates-distinguished-names) Distinguished Name from the caller's mTLS certificate.

#### `Claims`

`Claims` is a struct that contains information about permission claims granted to the caller. `Authorizer` assumes that the caller has been properly authenticated and trusts the `Claims` when making an authorization decision.

#### Default JWT ClaimMapper

Temporal offers a default JWT `ClaimMapper` that extracts the information needed to form Temporal `Claims`. This plugin requires a public key to validate digital signatures.

To get an instance of the default JWT `ClaimMapper`, call `NewDefaultJWTClaimMapper` and provide it with the following:

- a `TokenKeyProvider` instance
- a `config.Authorization` pointer
- a logger

The code for the default `ClaimMapper` can also be used to build a custom `ClaimMapper`.

#### Token key provider

A `TokenKeyProvider` obtains public keys from specified issuers' URIs that adhere to a specific format. The default JWT `ClaimMapper` uses this component to obtain and refresh public keys over time.

Temporal provides a `defaultTokenKeyProvider`. This component dynamically obtains public keys that follow the [JWKS format](https://tools.ietf.org/html/rfc7517). It supports the `RSA` and `ECDSA` algorithms.

```go
provider := authorization.NewDefaultTokenKeyProvider(cfg, logger)
```

:::note
`KeySourceURIs` are the HTTP endpoints that return public keys of token issuers in the [JWKS format](https://tools.ietf.org/html/rfc7517). `RefreshInterval` defines how frequently keys should be refreshed. For example, [Auth0](https://auth0.com/) exposes endpoints such as `https://YOUR_DOMAIN/.well-known/jwks.json`.
:::

By default, the `permissionsClaimName` value is `permissions`. Configure the plugin with `config.Config.Global.Authorization.JWTKeyProvider`.

#### JSON Web Token format

The default JWT `ClaimMapper` expects authorization tokens to be formatted as follows:

```
Bearer <token>
```

The Permissions Claim in the JWT Token is expected to be a collection of Individual Permission Claims. Each Individual Permission Claim must be formatted as follows:

```
<namespace>:<permission>
```

These permissions are then converted into Temporal roles for the caller. A permission can be one of Temporal's four values:

- read
- write
- worker
- admin

Multiple permissions for the same Namespace are overridden by the `ClaimMapper`.

##### Example of a payload for the default JWT ClaimMapper

```
{
   "permissions":[
      "temporal-system:read",
      "namespace1:write"
   ],
   "aud":[
      "audience"
   ],
   "exp":1630295722,
   "iss":"Issuer"
}
```

### What is an Authorizer Plugin? {#authorizer-plugin}

The `Authorizer` plugin contains a single `Authorize` method, which is invoked for each incoming API call. `Authorize` receives information about the API call, along with the role and permission claims of the caller.
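Before looking at configuration, here is a minimal Go sketch of a custom `Authorizer`. The policy shown, where system-level admins pass and everyone else needs writer access to the target Namespace, is illustrative rather than a recommendation.

```go
package security

import (
	"context"

	"go.temporal.io/server/common/authorization"
)

// namespaceAuthorizer allows callers with the system-level admin role to
// do anything; all other callers need at least writer access to the
// Namespace they are targeting.
type namespaceAuthorizer struct{}

func (a *namespaceAuthorizer) Authorize(
	ctx context.Context,
	claims *authorization.Claims,
	target *authorization.CallTarget,
) (authorization.Result, error) {
	if claims != nil {
		if claims.System&authorization.RoleAdmin != 0 {
			return authorization.Result{Decision: authorization.DecisionAllow}, nil
		}
		if claims.Namespaces[target.Namespace]&authorization.RoleWriter != 0 {
			return authorization.Result{Decision: authorization.DecisionAllow}, nil
		}
	}
	// Default deny: unauthenticated or under-privileged callers are rejected.
	return authorization.Result{Decision: authorization.DecisionDeny}, nil
}
```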
`Authorizer` allows for a wide range of authorization logic, which can take into account the call target, role and permission claims, and other data available to the system.

#### Configuration

The following arguments must be passed to `Authorizer`:

- `context.Context`: General context of the call.
- `authorization.Claims`: Claims about the roles assigned to the caller. Its intended use is described in the [`Claims`](#claims) section earlier on this page.
- `authorization.CallTarget`: Target of the API call.

`Authorizer` then returns one of two decisions:

- `DecisionDeny`: the requested API call is not invoked and an error is returned to the caller.
- `DecisionAllow`: the requested API call is invoked.

:::warning Security Warning
If you do **not** explicitly configure an `Authorizer`, Temporal uses the default `noopAuthorizer`. This default allows **every** API request, with no authentication or access control. Anyone who can reach your Temporal Server can invoke any API, including sensitive administrative operations.

This is **not secure** for production or for any environment that is accessible to untrusted clients (such as over the internet). **To protect your Temporal Server, you must configure an `Authorizer` plugin with a corresponding `ClaimMapper`.** Without this, your deployment is effectively open to anyone with network access.
:::

Configure your `Authorizer` with the [`temporal.WithAuthorizer`](/references/server-options#withauthorizer) server option, and your `ClaimMapper` with the [`temporal.WithClaimMapper`](/references/server-options#withclaimmapper) server option.

```go
temporalServer, err := temporal.NewServer(
	temporal.WithAuthorizer(newCustomAuthorizer()),
	temporal.WithClaimMapper(func(cfg *config.Config) authorization.ClaimMapper {
		return newCustomClaimMapper(cfg)
	}),
)
```

#### How to authorize SDK API calls {#authorize-api-calls}

When authentication is enabled, you can authorize API calls made to the Frontend Service. The Temporal Server [expects](#authentication) an `authorization` gRPC header with an authorization token to be passed with API calls if [authorization](#authorization) is configured.

Authorization tokens may be provided to the Temporal Java SDK by implementing the `io.temporal.authorization.AuthorizationTokenSupplier` interface. The implementation should be used to create an `io.temporal.authorization.AuthorizationGrpcMetadataProvider`, which may be configured on the service stub's gRPC interceptor list. The implementation is called for each SDK gRPC request and may supply dynamic tokens.

**JWT**

One of the token types that may be passed this way is the JWT. Temporal Server provides a [default implementation of JWT authentication](#default-jwt-claimmapper).

**Example**

```java
AuthorizationTokenSupplier tokenSupplier =
    // your implementation of the token supplier
    () -> "Bearer <token>";
WorkflowServiceStubsOptions serviceStubOptions =
    WorkflowServiceStubsOptions.newBuilder()
        // other service stub options
        .addGrpcMetadataProvider(new AuthorizationGrpcMetadataProvider(tokenSupplier))
        .build();
WorkflowServiceStubs service = WorkflowServiceStubs.newServiceStubs(serviceStubOptions);
WorkflowClient client = WorkflowClient.newInstance(service);
```

Related read:

- [How to secure a Temporal Service](/security)

## Data Converter {#data-converter}

Each Temporal SDK provides a [Data Converter](/dataconversion) that can be customized with a custom [Payload Codec](/payload-codec) to encode and secure your data.
For details on what data can be encoded, how to secure it, and what to consider when using encryption, see [Data encryption](/production-deployment/data-encryption).

#### Codec Server

You can use a [Codec Server](/codec-server) with your custom Payload Codec to decode the data you see on your Web UI and CLI locally through remote endpoints. However, ensure that you consider all security implications of [remote data encoding](/remote-data-encoding) before using a Codec Server.

For details on how to set up a Codec Server, see [Codec Server setup](/production-deployment/data-encryption#codec-server-setup).

---

## Temporal Platform security

Find security information for your Temporal deployment, whether you're using Temporal Cloud or self-hosting.

- **Company Security**: Learn about Temporal Technologies' general security practices, compliance certifications, and organizational security measures.
- **Temporal Cloud Security**: Explore the security features of our SaaS offering, including mTLS, end-to-end encryption, and enterprise compliance.
- **Self-Hosted Security**: Discover how to deploy and secure your own Temporal Platform infrastructure with production-ready best practices.
- **Temporal Cloud Security Whitepaper**: Learn how Temporal Cloud provides provable security by design, orchestrating encrypted workflows without ever accessing your sensitive data.

---

## Temporal Web UI configuration reference

The Temporal Web UI Server uses a configuration file for many of the UI's settings. An example development.yaml file can be found in the [temporalio/ui-server repo](https://github.com/temporalio/ui-server/blob/main/config/development.yaml).

Multiple configuration files can be created for configuring specific areas of the UI, such as Auth or TLS.

## auth

Configures authorization for the Temporal Server. Settings apply when Auth is enabled.

```yaml
auth:
  enabled: true
  providers:
    - label: sso # for internal use; in future may expose as button text
      type: oidc
      providerUrl: https://accounts.google.com
      issuerUrl:
      clientId: xxxxx-xxxx.apps.googleusercontent.com
      clientSecret: xxxxxxxxxxxxxxxxxxxx
      callbackUrl: https://xxxx.com:8080/auth/sso/callback
      scopes:
        - openid
        - profile
        - email
```

## batchActionsDisabled

Prevents the execution of Batch actions.

```yaml
batchActionsDisabled: false
```

## cloudUi

Enables the Cloud UI.

```yaml
cloudUi: false
```

## codec

Codec Server configuration.

```yaml
codec:
  endpoint: http://your-codec-server-endpoint
  passAccessToken: false
  includeCredentials: false
  decodeEventHistoryDownload: false
```

## cors

The name of the `cors` field stands for Cross-Origin Resource Sharing. Use this field to provide a list of domains that are authorized to access the UI Server APIs.

```yaml
cors:
  cookieInsecure: false
  allowOrigins:
    - http://localhost:3000 # used in development by https://github.com/temporalio/ui
```

## defaultNamespace

The default Namespace that the UI loads data for. Defaults to `default`.

```yaml
defaultNamespace: default
```

## disableWriteActions

Prevents the user from executing Workflow Actions on the Web UI. This option affects Bulk Actions for Recent Workflows as well as Workflow Actions on the Workflow Details page.

```yaml
disableWriteActions: false
```

:::note
`disableWriteActions` overrides the configuration values of each individual Workflow Action. Setting this variable to `true` disables all Workflow Actions on the Web UI.
:::

## enableUi

Enables the browser UI.
This configuration can be set dynamically with the [TEMPORAL_UI_ENABLED](/references/web-ui-environment-variables#temporal_ui_enabled) environment variable. If disabled (set to `false`), the UI server APIs remain available.

```yaml
enableUi: true
```

## feedbackUrl

The URL to direct users to when they click on the Feedback button in the UI. If not specified, it defaults to the UI's GitHub Issues page.

```yaml
feedbackUrl: https://github.com/temporalio/ui/issues/new/choose
```

## forwardHeaders

Configures headers for forwarding.

```yaml
forwardHeaders:
  -
```

## hideLogs

If enabled, prevents any server logs from being printed to the console.

```yaml
hideLogs: true
```

## hideWorkflowQueryErrors

Hides any errors resulting from a Query to the Workflow.

```yaml
hideWorkflowQueryErrors: false
```

## notifyOnNewVersion

When enabled (set to `true`), a notification appears in the UI when a newer version of the [Temporal Server](/temporal-service/temporal-server) is available.

```yaml
notifyOnNewVersion: true
```

## port

The port used by the Temporal Web UI Server and any APIs.

```yaml
port: 8080
```

## publicPath

The path used by the Temporal Web UI Server and any APIs.

```yaml
publicPath: ''
```

## refreshInterval

How often the UI Server reads the configuration file for new values. Currently, only [tls](#tls) configuration values are propagated during a refresh.

```yaml
refreshInterval: 1m
```

## showTemporalSystemNamespace

When enabled (set to `true`), the Temporal System Namespace becomes visible in the UI. The Temporal System Namespace lists Workflow Executions used by the Temporal Platform.

```yaml
showTemporalSystemNamespace: false
```

## temporalGrpcAddress

The Frontend address for the Temporal Cluster. The default address is localhost (127.0.0.1:7233).

```yaml
temporalGrpcAddress: default
```

## tls

Transport Layer Security (TLS) configuration for the Temporal Server. Settings apply when TLS is enabled.

```yaml
tls:
  caFile: ../ca.cert
  certFile: ../cluster.pem
  keyFile: ../cluster.key
  caData:
  certData:
  keyData:
  enableHostVerification: true
  serverName: tls-server
```

## workflowCancelDisabled

Prevents the user from canceling Workflow Executions from the Web UI.

```yaml
workflowCancelDisabled: false
```

## workflowResetDisabled

Prevents the user from resetting Workflows from the Web UI.

```yaml
workflowResetDisabled: false
```

## workflowSignalDisabled

Prevents the user from signaling Workflow Executions from the Web UI.

```yaml
workflowSignalDisabled: false
```

## workflowTerminateDisabled

Prevents the user from terminating Workflow Executions from the Web UI.

```yaml
workflowTerminateDisabled: false
```

---

## Temporal Web UI environment variables reference

You can use environment variables to dynamically alter the configuration of your Temporal Web UI. These can be used in many environments, such as with Docker.
For example:

```
docker run \
  -e TEMPORAL_ADDRESS=127.0.0.1:7233 \
  -e TEMPORAL_UI_PORT=8080 \
  -e TEMPORAL_UI_PUBLIC_PATH=path/to/webui \
  -e TEMPORAL_UI_ENABLED=true \
  -e TEMPORAL_BANNER_TEXT="Some banner text" \
  -e TEMPORAL_CLOUD_UI=false \
  -e TEMPORAL_DEFAULT_NAMESPACE=default \
  -e TEMPORAL_FEEDBACK_URL=https://feedback.here \
  -e TEMPORAL_NOTIFY_ON_NEW_VERSION=true \
  -e TEMPORAL_CONFIG_REFRESH_INTERVAL=0s \
  -e TEMPORAL_SHOW_TEMPORAL_SYSTEM_NAMESPACE=false \
  -e TEMPORAL_DISABLE_WRITE_ACTIONS=false \
  -e TEMPORAL_AUTH_ENABLED=true \
  -e TEMPORAL_AUTH_TYPE=oidc \
  -e TEMPORAL_AUTH_PROVIDER_URL=https://accounts.google.com \
  -e TEMPORAL_AUTH_ISSUER_URL=https://accounts.google.com \
  -e TEMPORAL_AUTH_CLIENT_ID=xxxxx-xxxx.apps.googleusercontent.com \
  -e TEMPORAL_AUTH_CLIENT_SECRET=xxxxxxxxxxxxxxx \
  -e TEMPORAL_AUTH_CALLBACK_URL=https://xxxx.com:8080/auth/sso/callback \
  -e TEMPORAL_AUTH_SCOPES=openid,email,profile \
  -e TEMPORAL_TLS_CA=../ca.cert \
  -e TEMPORAL_TLS_CERT=../cluster.pem \
  -e TEMPORAL_TLS_KEY=../cluster.key \
  -e TEMPORAL_TLS_ENABLE_HOST_VERIFICATION=true \
  -e TEMPORAL_TLS_SERVER_NAME=tls-server \
  -e TEMPORAL_CODEC_ENDPOINT=https://codec.server \
  -e TEMPORAL_CODEC_PASS_ACCESS_TOKEN=false \
  -e TEMPORAL_CODEC_INCLUDE_CREDENTIALS=false \
  -e TEMPORAL_HIDE_LOGS=false \
  temporalio/ui:<tag>
```

The environment variables are defined in the [UI server configuration template file](https://github.com/temporalio/ui-server/blob/main/config/docker.yaml) and described in more detail below.

## `TEMPORAL_ADDRESS`

The [Frontend Service](/temporal-service/temporal-server#frontend-service) address for the Temporal Cluster. This variable can be set [in the base configuration file](/references/web-ui-configuration#temporalgrpcaddress) using `temporalGrpcAddress`. This variable is required for setting other environment variables.

## `TEMPORAL_UI_PORT`

The port used by the Temporal Web UI Server and the HTTP API. This variable is needed for `TEMPORAL_OPENAPI_ENABLED` and all auth-related settings to work properly.

## `TEMPORAL_UI_PUBLIC_PATH`

Stores a value such as "" or "/custom-path" that allows the UI to be served from a subpath.

## `TEMPORAL_UI_ENABLED`

Enables or disables the [browser UI](/references/web-ui-configuration#enableui) for the Temporal Cluster. Enabling the browser UI allows the Server to be accessed from your web browser. If disabled, the server cannot be viewed on the web, but the UI server APIs remain available for use.

## `TEMPORAL_BANNER_TEXT`

Provides banner text to display on the Web UI.

## `TEMPORAL_CLOUD_UI`

If enabled, uses the alternate UI from Temporal Cloud.

## `TEMPORAL_DEFAULT_NAMESPACE`

The default [Namespace](/namespaces) that the Web UI opens first.

## `TEMPORAL_FEEDBACK_URL`

The URL that users are directed to when they click the Feedback button in the UI. If not specified, this variable defaults to the UI's GitHub Issues page.

## `TEMPORAL_NOTIFY_ON_NEW_VERSION`

Enables or disables notifications that appear in the UI whenever a newer version of the Temporal Cluster is available.

## `TEMPORAL_CONFIG_REFRESH_INTERVAL`

Determines how often the UI Server reads the configuration file for new values.

## `TEMPORAL_SHOW_TEMPORAL_SYSTEM_NAMESPACE`

If enabled, shows the System Namespace that handles internal Temporal Workflows in the Web UI.

## `TEMPORAL_DISABLE_WRITE_ACTIONS`

Disables any button in the UI that allows the user to modify Workflows or Activities.

## `TEMPORAL_AUTH_ENABLED`

Enables or disables Web UI authentication and authorization methods.
When enabled, the Web UI uses the provider information in the [UI configuration file](/references/web-ui-configuration#auth) to verify the identity of users. All auth-related variables can be defined when `TEMPORAL_AUTH_ENABLED` is set to "true". Disabling the variable retains any values that have already been provided.

## `TEMPORAL_AUTH_TYPE`

Specifies the type of authentication. Defaults to `oidc`.

## `TEMPORAL_AUTH_PROVIDER_URL`

The .well-known IdP discovery URL for authentication and authorization. This can be set in the UI server configuration with [auth](/references/web-ui-configuration#auth).

## `TEMPORAL_AUTH_ISSUER_URL`

The URL for the authentication or authorization issuer. This value is only needed when the issuer differs from the auth provider URL.

## `TEMPORAL_AUTH_CLIENT_ID`

The client ID used for authentication or authorization. This value is a required parameter.

## `TEMPORAL_AUTH_CLIENT_SECRET`

The client secret used for authentication and authorization. Client secrets are used by the OAuth client for authentication.

## `TEMPORAL_AUTH_CALLBACK_URL`

The callback URL used by Temporal for authentication and authorization. Callback URLs are invoked by the IdP after the user has finished authenticating with the IdP.

## `TEMPORAL_AUTH_SCOPES`

Specifies a set of scopes for auth. Typically, this is `openid`, `profile`, `email`.

## `TEMPORAL_TLS_CA`

The path for the Transport Layer Security (TLS) Certificate Authority file. In order to [configure TLS for your server](/references/web-ui-configuration#tls), you'll need a CA certificate issued by a trusted Certificate Authority. Set this variable to properly locate and use the file.

## `TEMPORAL_TLS_CERT`

The path for the Transport Layer Security (TLS) certificate. In order to [configure TLS for your server](/references/web-ui-configuration#tls), you'll need a certificate (self-signed or CA-issued). Set the path to allow the environment to locate and use the certificate.

## `TEMPORAL_TLS_KEY`

The path for the Transport Layer Security (TLS) [key file](/references/web-ui-configuration#tls). The key file holds the private key that corresponds to the TLS certificate and is used for encryption and signing.

## `TEMPORAL_TLS_CA_DATA`

Stores the data for a TLS CA file. This variable can be used instead of providing a path for `TEMPORAL_TLS_CA`.

## `TEMPORAL_TLS_CERT_DATA`

Stores the data for a TLS cert file. This variable can be used instead of providing a path for `TEMPORAL_TLS_CERT`.

## `TEMPORAL_TLS_KEY_DATA`

Stores the data for a TLS key file. This variable can be used instead of providing a path for `TEMPORAL_TLS_KEY`.

## `TEMPORAL_TLS_ENABLE_HOST_VERIFICATION`

Enables or disables [Transport Layer Security (TLS) host verification](/references/web-ui-configuration#tls). When enabled, the TLS handshake verifies that the server's certificate matches the expected hostname.

## `TEMPORAL_TLS_SERVER_NAME`

The server name used for [Transport Layer Security (TLS) host verification](/references/web-ui-configuration#tls). If set, it overrides the hostname used when verifying the certificate presented by the Temporal Server.

## `TEMPORAL_CODEC_ENDPOINT`

The endpoint for the [Codec Server](/codec-server), if configured.

## `TEMPORAL_CODEC_PASS_ACCESS_TOKEN`

Specifies whether to send a JWT access token as the `authorization` header in requests to the Codec Server.

## `TEMPORAL_CODEC_INCLUDE_CREDENTIALS`

Specifies whether to include credentials along with requests to the Codec Server.
## `TEMPORAL_FORWARD_HEADERS`

Specifies HTTP headers to forward from HTTP API requests to the Temporal gRPC backend. This is a comma-delimited list of the HTTP headers to be forwarded.

## `TEMPORAL_HIDE_LOGS`

If enabled, does not print logs from the Temporal Service.

---

## Temporal Web UI

The Temporal Web UI provides users with Workflow Execution state and metadata for debugging purposes. It ships with every [Temporal CLI](/cli) release and [Docker Compose](https://github.com/temporalio/docker-compose) update and is available with [Temporal Cloud](/cloud).

You can configure the Temporal Web UI to work in your own environment. See the [UI configuration reference](/references/web-ui-configuration).

Web UI open source repos:

- [temporalio/ui](https://github.com/temporalio/ui)
- [temporalio/ui-server](https://github.com/temporalio/ui-server)

## Namespaces

All Namespaces in your self-hosted Temporal Service or Temporal Cloud account are listed under **Namespaces** in the left section of the window. You can also switch Namespaces from the Workflows view by selecting from the Namespace switcher at the top right corner of the window. After you select a Namespace, the Web UI shows the Recent Workflows page for that Namespace.

In Temporal Cloud, users can access only the Namespaces that they have been granted access to. For details, see [Namespace-level permissions](/cloud/users#namespace-level-permissions).

## Workflows

The main Workflows page displays a table of all Workflow Executions within the retention period. Users can list Workflow Executions by any of the following:

- [Status](/workflow-execution#workflow-execution-status)
- [Workflow ID](/workflow-execution/workflowid-runid#workflow-id)
- [Workflow Type](/workflow-definition#workflow-type)
- Start time
- End time
- Any other default or custom [Search Attribute](/search-attribute), using a [List Filter](/list-filter)

For start time and end time, users can set their preferred date and time format as one of the following:

- UTC
- Local
- Relative

Select a Workflow Execution to view the Workflow Execution's History, Workers, Relationships, pending Activities and Nexus Operations, Queries, and Metadata.

### Saved Views {#saved-views}

Saved Views let you save and reuse your frequently used visibility queries in the Temporal Web UI. Instead of recreating complex filters every time, you can save them once and apply them with a single click.

Saved Views are stored locally in your browser and are available whenever you use the Temporal Web UI in that browser. Each user has their own private collection.

#### Apply a Saved View

By default, the Workflows page has several default Saved Views. You can also create your own Saved Views. Click the name of a Saved View in the list to display the corresponding Workflows that match the query. The Workflow List page will refresh with the results of the Saved View.

#### Create a Saved View

You can create a new Saved View from the Workflows page.

1. Create a Saved View by using the filter UI to build your criteria, or use the raw query editor to write custom query strings.
1. Your new view will appear in the Custom Views list as New View. Click the Save as New button to bring up the Save as View window. Name your Saved View. Names must be unique to each user and can contain a maximum of 255 characters.
1. Click Save. Your new view will appear in the Custom Views list.

You can create up to 20 Saved Views.
When you reach this limit, you'll need to delete some Saved Views before you can save new ones.

#### Make Temporary Changes to a Saved View query

You can modify a Saved View temporarily without changing the saved criteria.

1. Select the Saved View you want to change.
1. Adjust the UI filters as needed.
1. The Workflows page will refresh with the results of the new query, without changing the Saved View.
1. If you want to keep your temporary changes, you can:
   - Click Save, which will replace the original Saved View with your modifications.
   - Click Edit, modify the name, and click Save, which will replace the original Saved View with your modifications and change the name.
   - Click Edit, modify the name, and click Create New, which will create a new Saved View with your new settings and a new name.

#### Rename a Saved View Query

You can rename an existing Saved View from the Workflows page.

1. Select the Saved View you want to change.
1. Click Edit.
1. In the Edit View dialog box, enter a new name for the Saved View.
1. Click Save to apply your changes and rename the existing Saved View, or click Create Copy to create a new Saved View with the new name.

#### Deleting Saved Views

You can delete a Saved View from the Workflows page when it is no longer useful or to make room for new Saved Views.

1. Select the Saved View you want to delete. You can only delete queries you've created; you cannot delete the system defaults.
1. Click "Edit" and then "Delete this Saved View".

:::note Deleting Saved Views is permanent
Deleted queries cannot be recovered, so make sure you won't need them again. If you accidentally delete a Saved View, you will need to recreate it.
:::

#### Share a Saved View

You can share a Saved View as a URL.

1. Select the Saved View you want to share.
1. Click the "Share" button to copy the URL for this Saved View to the clipboard. You can also copy the URL directly from the browser.

:::note Saved Views and time
Saved Views that use relative times are shared with absolute times.
:::

## Task Failures View {#task-failures-view}

The Task Failures view is a predefined Saved View that displays Workflows that have a Workflow Task failure. These Workflows are still running, but one of their Tasks has failed or timed out. The Task Failures view displays each Workflow's ID, Run ID, and Workflow Type. Clicking any of the links in the details opens the Workflow page for that Workflow. On this page, you will find more information about the Task that failed and the remaining pending Tasks. You can also cancel the Workflow by clicking the Request Cancellation button on this page.

The system monitors Workflow Task execution patterns in real time. When a Workflow experiences five consecutive Task failures or timeouts, it is automatically flagged. The moment the Workflow recovers with a successful Task, the flag clears. This threshold filters out minor glitches while surfacing Workflows with genuine problems.

### Activating Task Failures View {#activate-task-failures-view}

This view is enabled by default for Temporal Cloud users. If you're self-hosting Temporal, you'll need to update the `system.numConsecutiveWorkflowTaskProblemsToTriggerSearchAttribute` [dynamic config](/references/dynamic-configuration).
Here's an example of how to make the config update for the dev server: ```command temporal server start-dev \ --dynamic-config-value system.numConsecutiveWorkflowTaskProblemsToTriggerSearchAttribute=5 ``` `numConsecutiveWorkflowTaskProblemsToTriggerSearchAttribute` is the number of consecutive Workflow Task Failures required to trigger the `TemporalReportedProblems` search attribute. The default value is 5. If adding this search attribute causes strain on the visibility system, consider increasing this number. To turn off the feature for a Namespace, set `numConsecutiveWorkflowTaskProblemsToTriggerSearchAttribute` to 0. ## History A Workflow Execution History is a view of the [Events](/workflow-execution/event#event) and Event fields within the Workflow Execution. Approximately [40 different Events](/references/events) can appear in a Workflow Execution's Event History. The top of the page lists the following execution metadata: - Start Time, Close Time and Duration - [Run Id](/workflow-execution/workflowid-runid#run-id) - [Workflow Type](/workflow-definition#workflow-type) - [Task Queue](/task-queue) - Parent and Parent ID - SDK - [State Transitions](/workflow-execution#state-transition) - [Billable Actions Count](/cloud/actions#actions-in-workflows) (Temporal Cloud only) The Input and Results section displays the function arguments and return values for debugging purposes. Results are not available until the Workflow finishes. The History tab has the following views: - Timeline: A chronological or reverse-chronological view of Events with a summary. Clicking into an Event displays all details for that Event. - All: View all History Events. - Compact: A logical grouping of Activities, Signals and Timers. - JSON: The full JSON for the Workflow Execution's Event History. ### Download Event History The entire Workflow Execution Event History, in JSON format, can be downloaded from this section. ### Workflow Actions From the UI, you can request Cancellation of a Workflow Execution, send it a Signal or Update, or Reset or Terminate it. Start a new Workflow Execution with pre-filled values by using the Start Workflow Like This One button. ### Relationships Displays the full hierarchy of a Workflow Execution with all parent and child nodes displayed in a tree. ### Workers Displays the Workers currently polling on the Workflow Task Queue with a count. If no Workers are polling, an error displays. ### Pending Activities Displays a summary of recently active and/or pending Activity Executions. Clicking a pending Activity directs the user to the Pending Activities tab to view details. ### Call Stack The screen shows the captured result from the [\_\_stack_trace](/sending-messages#stack-trace-query) Query. The Query is performed when the tab is selected. It works only if a Worker is running and available to return the call stack. The call stack shows each location where Workflow code is waiting. ### Queries Lists all Queries sent to the Workflow Execution. ### Metadata Displays User Metadata, including static Workflow Summary and Details and dynamic Current Details. Lists all Events with User Metadata to give you a human-readable log of what's happening in your Workflow. ## Schedules In the Temporal Cloud and self-hosted Temporal Service Web UI, the Schedules page lists all the [Schedules](/schedule) created on the selected Namespace. Click a Schedule to see details, such as configured frequency, start and end times, and recent and upcoming runs.
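You can also create a Schedule from the command line with the Temporal CLI. The following is a minimal sketch; the Schedule ID, Workflow Type, Task Queue, and Workflow ID shown are hypothetical placeholders, and the Cron string format used by `--cron` is explained in the tip below:

```command
temporal schedule create \
  --schedule-id weekly-report \
  --cron "0 12 * * MON" \
  --type ReportWorkflow \
  --task-queue reports \
  --workflow-id weekly-report-run
```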
:::tip Setting Schedules with Strings Temporal Workflow Schedule Cron strings follow this format:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```

::: ### Settings On Temporal Cloud, **Settings** is visible only to the Account Owner and Global Admin [roles](/cloud/users#account-level-roles). Click **Settings** to see and manage the list of users in your account and to set up integrations such as [Observability](/cloud/metrics) and [Audit logging](/cloud/audit-logs). On a self-hosted Temporal Service, manage your users, metrics, and logging in your [server configuration](/references/configuration). ### Archive On a self-hosted Temporal Service, Archive shows [Archived](/temporal-service/archival) data of your Workflow Executions on the Namespace. To see data in your self-hosted Temporal Service, you must have [Archival set up and configured](/self-hosted-guide/archival). For information and details on the Archive feature in Temporal Cloud, contact your Temporal representative. ### Codec Server The Web UI can use a [Codec Server](/codec-server) with a custom Data Converter to decode inputs and return values. For details, see [Securing your data](/production-deployment/data-encryption). The UI supports a [Codec Server endpoint](/production-deployment/data-encryption#web-ui). For details on setting the Codec Server endpoint, see [Codec Server setup](/production-deployment/data-encryption#codec-server-setup). --- ## Glossary The following terms are used in [Temporal Platform](/temporal) documentation. #### [Action](/cloud/pricing#action) An Action is the fundamental pricing unit in Temporal Cloud. Temporal Actions are the building blocks for Workflow Executions. When you execute a Temporal Workflow, its Actions create the ongoing state and progress of your Temporal Application. #### [Actions Per Second (APS)](/cloud/limits#actions-per-second) APS, or Actions per second, is specific to Temporal Cloud. Each Temporal Cloud Namespace enforces a rate limit, which is measured in Actions per second (APS). This is the number of Actions, such as starting or signaling a Workflow, that can be performed per second within a specific Namespace. #### [Activity](/activities) In day-to-day conversation, the term "Activity" denotes an Activity Type, Activity Definition, or Activity Execution. #### [Activity Definition](/activity-definition) An Activity Definition is the code that defines the constraints of an Activity Task Execution. #### [Activity Execution](/activity-execution) An Activity Execution is the full chain of Activity Task Executions. #### [Activity Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) An Activity Heartbeat is a ping from the Worker that is executing the Activity to the Temporal Service. Each ping informs the Temporal Service that the Activity Execution is making progress and the Worker has not crashed. #### [Activity Id](/activity-execution#activity-id) A unique identifier for an Activity Execution. #### [Activity Task](/tasks#activity-task) An Activity Task contains the context needed to make an Activity Task Execution. #### [Activity Task Execution](/tasks#activity-task-execution) An Activity Task Execution occurs when a Worker uses the context provided from the Activity Task and executes the Activity Definition.
#### [Activity Type](/activity-definition#activity-type) An Activity Type is the mapping of a name to an Activity Definition. #### [Archival](/temporal-service/archival) Archival is a feature specific to a Self-hosted Temporal Service that automatically backs up Event Histories from Temporal Service persistence to a custom blob store after the Closed Workflow Execution retention period is reached. #### [Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) Asynchronous Activity Completion occurs when an external system provides the final result of a computation, started by an Activity, to the Temporal System. #### [Audit Logging](/cloud/audit-logs) Audit Logging is a feature that provides forensic access information for accounts, users, and Namespaces. #### [Authorizer Plugin](/self-hosted-guide/security#authorizer-plugin) The `Authorizer` plugin contains a single `Authorize` method, which is invoked for each incoming API call. `Authorize` receives information about the API call, along with the role and permission claims of the caller. #### [Availability Zone](/cloud/high-availability) An availability zone is a part of the Temporal system where tasks or operations are handled and executed. This design helps manage workloads and ensure tasks are completed. Temporal Cloud Namespaces are automatically distributed across three availability zones, offering the 99.9% uptime outlined in our Cloud [SLA](/cloud/sla). #### [Child Workflow](/child-workflows) A Child Workflow Execution is a Workflow Execution that is spawned from within another Workflow. #### [Claim Mapper](/self-hosted-guide/security#claim-mapper) The Claim Mapper component is a pluggable component that extracts Claims from JSON Web Tokens (JWTs). #### [Codec Server](/codec-server) A Codec Server is an HTTP server that uses your custom Payload Codec to encode and decode your data remotely through endpoints. #### [Command](/workflow-execution#command) A Command is a requested action issued by a Worker to the Temporal Service after a Workflow Task Execution completes. #### [Continue-As-New](/workflow-execution/continue-as-new) Continue-As-New is the mechanism by which all relevant state is passed to a new Workflow Execution with a fresh Event History. #### [Core SDK](https://temporal.io/blog/why-rust-powers-core-sdk) The Core SDK is a shared common core library used by several Temporal SDKs. Written in Rust, the Core SDK provides complex concurrency management and state machine logic among its standout features. Centralizing development enables the Core SDK to support quick and reliable deployment of new features to existing SDKs, and to more easily add new SDK languages to the Temporal ecosystem. #### [Custom Data Converter](/default-custom-data-converters#custom-data-converter) A custom Data Converter extends the default Data Converter with custom logic for Payload conversion or Payload encryption. #### [Data Converter](/dataconversion) A Data Converter is a Temporal SDK component that serializes and encodes data entering and exiting a Temporal Service. #### [Default Data Converter](/default-custom-data-converters#default-data-converter) The default Data Converter is used by the Temporal SDK to convert objects into bytes using a series of Payload Converters. #### [Delay Workflow Execution](/workflow-execution/timers-delays) Start Delay determines the amount of time to wait before initiating a Workflow Execution. 
If the Workflow receives a Signal-With-Start or Update-With-Start during the delay, it dispatches a Workflow Task and the remaining delay is bypassed. #### [Dual Visibility](/dual-visibility) Dual Visibility is a feature, specific to a Self-hosted Temporal Service, that lets you set a secondary Visibility store in your Temporal Service to facilitate migrating your Visibility data from one database to another. #### [Durable Execution](/temporal#durable-execution) Durable Execution in the context of Temporal refers to the ability of a Workflow Execution to maintain its state and progress even in the face of failures, crashes, or server outages. #### [Dynamic Handler](/dynamic-handler) Dynamic Handlers are Workflows, Activities, Signals, or Queries that are unnamed and invoked when no other named handler matches the call from the Server at runtime. #### [Event](/workflow-execution/event#event) Events are created by a Temporal Service in response to external occurrences and Commands generated by a Workflow Execution. #### [Event History](/workflow-execution/event#event-history) An append-only log of Events that represents the full state of a Workflow Execution. #### [Failback](/cloud/high-availability) After Temporal Cloud has resolved an outage or incident involving a failover, a failback process shifts Workflow Execution processing back to the original region that was active before the incident. #### [Failover](/cloud/high-availability) A failover shifts Workflow Execution processing from an active Temporal Namespace region to a standby Temporal Namespace region during outages or other incidents. Standby Namespace regions use replication to duplicate data and prevent data loss during failover. #### [Failure](/temporal#failure) Temporal Failures are representations of various types of errors that occur in the system. #### [Failure Converter](/failure-converter) A Failure Converter converts error objects to proto Failures and back. #### [Failures](/references/failures) A Failure is Temporal's representation of various types of errors that occur in the system. #### [Frontend Service](/temporal-service/temporal-server#frontend-service) The Frontend Service is a stateless gateway service that exposes a strongly typed Proto API. The Frontend Service is responsible for rate limiting, authorizing, validating, and routing all inbound calls. #### [General Availability](/evaluate/development-production-features/release-stages#general-availability) Learn more about the General Availability release stage #### [Global Namespace](/global-namespace) A Global Namespace is a Namespace that duplicates data from an active [Temporal Service](#temporal-cluster) to a standby Service, using replication to keep both Namespaces in sync. Global Namespaces are designed to respond to service issues like network congestion. When service to the primary Cluster is compromised, a [failover](#failover) transfers control from the active to the standby Cluster. #### [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) A Heartbeat Timeout is the maximum time between Activity Heartbeats. #### [High Availability](/cloud/high-availability/) High availability ensures that a system remains operational with minimal downtime. It achieves this with redundancy and failover mechanisms that handle failures, so end-users remain unaware of incidents.
Temporal Cloud guarantees this high availability with its Service Level Agreements ([SLA](/cloud/sla)). #### [High Availability features](/cloud/high-availability#high-availability-features) High Availability features automatically synchronize your data between a primary Namespace and its replica, keeping them in sync. In case of an incident or an outage, Temporal will automatically fail over your Namespace from the primary to the replica. This supports high levels of business continuity, allowing Workflow Executions to continue with minimal interruptions or data loss. #### [History Service](/temporal-service/temporal-server#history-service) The History Service is responsible for persisting Workflow Execution state and determining what to do next to progress the Workflow Execution through History Shards. #### [History Shard](/temporal-service/temporal-server#history-shard) A History Shard is an important unit within a Temporal Service by which the scale of concurrent Workflow Execution throughput can be measured. #### [Idempotency](/activity-definition#idempotency) An idempotent operation produces the same effect no matter how many times it runs, avoiding process duplication that could, for example, withdraw money twice or ship extra orders by mistake. Because Activities can be retried, design them to take effect once and only once; this maintains data integrity and prevents costly errors. #### [Isolation Domain](/cloud/high-availability) An isolation domain is a defined area within Temporal Cloud's infrastructure. It helps contain failures and prevents them from spreading to other parts of the system, providing redundancy and fault tolerance. #### [List Filter](/list-filter) A List Filter is the SQL-like string that is provided as the parameter to an advanced Visibility List API. #### [Local Activity](/local-activity) A Local Activity is an Activity Execution that executes in the same process as the Workflow Execution that spawns it. #### [Matching Service](/temporal-service/temporal-server#matching-service) The Matching Service is responsible for hosting external Task Queues for Task dispatching. #### [Memo](/workflow-execution#memo) A Memo is a non-indexed user-supplied set of Workflow Execution metadata that is returned when you describe or list Workflow Executions. #### [Multi-Cluster Replication](/self-hosted-guide/multi-cluster-replication) Multi-Cluster Replication is a feature that asynchronously replicates Workflow Executions from active Clusters to other passive Clusters, for backup and state reconstruction. #### [Multi-cloud Replication](/cloud/high-availability/enable) Multi-cloud Replication replicates Workflows and metadata to a different cloud provider (AWS or GCP). This is particularly beneficial for organizations required to be highly available across regions for compliance purposes. #### [Multi-region Replication](/cloud/high-availability/enable) Multi-region Replication replicates Workflows and metadata to a different region that is not co-located with the primary Namespace. This is particularly beneficial for organizations with multi-regional architectures or those required to be highly available across regions for compliance purposes. #### [Namespace](/namespaces) A Namespace is a unit of isolation within the Temporal Platform. #### Nexus Async Completion Callback A Nexus Async Completion Callback is the completion callback for an asynchronous Nexus Operation.
#### Nexus Endpoint A Nexus Endpoint is a reverse proxy that can serve one or more Nexus Services. It routes Nexus requests to a target Namespace and Task Queue that a Nexus Worker is polling. This allows service providers to present a clean service contract and hide the underlying implementation, which may consist of many internal Workflows. Multiple Nexus Endpoints can target the same Namespace, and over time a Nexus Endpoint will be able to span multiple Namespaces with service routing rules. #### Nexus Machinery Temporal has built-in Nexus Machinery to guarantee at-least-once execution of Nexus Operations with state-machine-based invocation and completion callbacks. The Nexus Machinery uses [Nexus RPC](/glossary#nexus-rpc), a protocol designed with Durable Execution in mind, to communicate across Namespace boundaries. Caller Workflows and Nexus handlers don't have to use Nexus RPC directly, since the Temporal SDK provides a streamlined developer experience to build, run, and use Nexus Services. #### Nexus Operation An arbitrary-duration operation that may be synchronous or asynchronous, short-lived or long-lived, and used to connect durable executions within and across Namespaces, clusters, regions, and clouds. Unlike a traditional RPC, an asynchronous Nexus Operation has an operation token that can be used to re-attach to a long-lived Nexus Operation, for example, one backed by a Temporal Workflow. Nexus Operations support a uniform interface to get the status of an operation or its result, receive a completion callback, or cancel the operation – all of which are fully integrated into the Temporal Platform. #### Nexus Operation Events Nexus Operation Events are history events that surface in the Caller Workflow to indicate the state of an Operation, including `Nexus Operation Scheduled`, `Nexus Operation Started`, and `Nexus Operation Completed`. #### Nexus Operation Handler A Nexus Operation Handler is the Nexus handler code in a Temporal Worker, typically created using Temporal SDK builder functions that make it easy to abstract Temporal primitives and expose a clean service contract for others to use. #### Nexus Registry The Nexus Registry manages Nexus Endpoints and provides lookup services for resolving Nexus requests at runtime. In the open source version of Temporal, the Registry is scoped to a Cluster, while in Temporal Cloud, it is scoped to an Account. Endpoint names must be unique within the Registry. When the Temporal Service dispatches a Nexus request, it resolves the request's Endpoint to a Namespace and Task Queue through the Registry. #### [Nexus RPC](https://github.com/nexus-rpc/api/blob/main/SPEC.md) Nexus RPC is a protocol designed with durable execution in mind. It supports arbitrary-duration Operations that extend beyond a traditional RPC — a key underpinning to connect durable executions within and across Namespaces, clusters, regions, and cloud boundaries. #### Nexus Service A Nexus Service is a named collection of arbitrary-duration Nexus Operations that provide a microservice contract suitable for sharing across team and application boundaries. Nexus Services are registered with a Temporal Worker that is polling a Nexus Endpoint's target Namespace and Task Queue. #### Nexus Service Contract A common code package, schema, or documentation that a Caller can use to obtain Service and Operation names and the associated input/output types a Service will accept for a given Operation.
#### [Parent Close Policy](/parent-close-policy) If a Workflow Execution is a Child Workflow Execution, a Parent Close Policy determines what happens to the Workflow Execution if its Parent Workflow Execution changes to a Closed status (Completed, Failed, Timed out). #### [Payload](/dataconversion#payload) A Payload represents binary data such as input and output from Activities and Workflows. #### [Payload Codec](/payload-codec) A Payload Codec transforms an array of Payloads into another array of Payloads. #### [Payload Converter](/payload-converter) A Payload Converter serializes data, converting objects or values to bytes and back. #### [Pre-release](/evaluate/development-production-features/release-stages#pre-release) Learn more about the Pre-release stage #### [Public Preview](/evaluate/development-production-features/release-stages#public-preview) Learn more about the Public Preview release stage #### [Query](/sending-messages#sending-queries) A Query is a synchronous operation that is used to report the state of a Workflow Execution. #### [Remote data encoding](/remote-data-encoding) Remote data encoding means using your custom Data Converter to decode (and encode) your Payloads remotely through endpoints. #### [Replication Lag](/cloud/high-availability/monitoring#replication-lag-metric) The transmission delay of Workflow updates and history events from the active region to the standby region. #### [Requests Per Second (RPS)](/references/dynamic-configuration#service-level-rps-limits) RPS, or Requests per second, is used in the Temporal Service (both in self-hosted Temporal and Temporal Cloud). This is a measure that controls the rate of requests at the service level, such as the Frontend, History, or Matching Service. #### [Reset](/workflow-execution/event#reset) A Reset terminates a Workflow Execution, removes all progress in the Event History after the reset point, and then creates a new Workflow Execution with the same Workflow Type and Id to continue from that point. #### [Retention Period](/temporal-service/temporal-server#retention-period) A Retention Period is the amount of time a Workflow Execution Event History remains in the Temporal Service's persistence store. #### [Retry Policy](/encyclopedia/retry-policies) A Retry Policy is a collection of attributes that instructs the Temporal Server how to retry a failure of a Workflow Execution or an Activity Task Execution. #### [Run Id](/workflow-execution/workflowid-runid#run-id) A Run Id is a globally unique, platform-level identifier for a Workflow Execution. #### [Same-region Replication](/cloud/high-availability/enable) Same-region Replication replicates Workflows and metadata to an isolation domain within the same region as the primary Namespace. It provides a reliable failover mechanism while maintaining deployment simplicity. #### [Schedule](/schedule) A Schedule enables the scheduling of Workflow Executions. #### [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) A Schedule-To-Close Timeout is the maximum amount of time allowed for the overall Activity Execution, from when the first Activity Task is scheduled to when the last Activity Task, in the chain of Activity Tasks that make up the Activity Execution, reaches a Closed status.
#### [Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout) A Schedule-To-Start Timeout is the maximum amount of time that is allowed from when an Activity Task is placed in a Task Queue to when a Worker picks it up from the Task Queue. #### [Search Attribute](/search-attribute) A Search Attribute is an indexed name used in List Filters to filter a list of Workflow Executions that have the Search Attribute in their metadata. #### [Side Effect](/workflow-execution/event#side-effect) A Side Effect is a way to execute a short, non-deterministic code snippet, such as generating a UUID, that executes the provided function once and records its result into the Workflow Execution Event History. #### [Signal](/sending-messages#sending-signals) A Signal is an asynchronous request to a Workflow Execution. #### [Signal-With-Start](/sending-messages#signal-with-start) Signal-With-Start starts and Signals a Workflow Execution, or just Signals it if it already exists. #### [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) A Start-To-Close Timeout is the maximum time allowed for a single Activity Task Execution. #### [State Transition](/workflow-execution#state-transition) A State Transition is a unit of progress by a Workflow Execution. #### [Sticky Execution](/sticky-execution) A Sticky Execution is when a Worker Entity caches the Workflow Execution Event History and creates a dedicated Task Queue to listen on. #### [Task](/tasks#task) A Task is the context needed to make progress with a specific Workflow Execution or Activity Execution. #### [Task Queue](/task-queue) A Task Queue is a first-in, first-out queue that a Worker Process polls for Tasks. #### [Task Routing](/task-routing) Task Routing is when a Task Queue is paired with one or more Worker Processes, primarily for Activity Task Executions. #### [Task Token](/activity-execution#task-token) A Task Token is a unique identifier for an Activity Task Execution. #### [Temporal](/temporal) Temporal is a scalable and reliable runtime for Reentrant Processes called Temporal Workflow Executions. #### [Temporal Application](/temporal#temporal-application) A Temporal Application is a set of Workflow Executions. #### [Temporal CLI](/cli) {#cli} The Temporal CLI is the most recent version of Temporal's command-line tool. #### [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) A Temporal Client, provided by a Temporal SDK, provides a set of APIs to communicate with a Temporal Service. #### [Temporal Cloud](/cloud/overview) Temporal Cloud is a managed, hosted Temporal environment that provides a platform for Temporal Applications. #### [Temporal Cloud Account Id](/cloud/namespaces#temporal-cloud-account-id) A Temporal Cloud Account Id is a unique identifier for a customer. #### [Temporal Cloud Namespace Id](/cloud/namespaces#temporal-cloud-namespace-id) A Cloud Namespace Id is a globally unique identifier for a Namespace in Temporal Cloud. #### [Temporal Cloud Namespace Name](/cloud/namespaces#temporal-cloud-namespace-name) A Cloud Namespace Name is a customer-supplied name for a Namespace in Temporal Cloud. #### [Temporal Cloud gRPC Endpoint](/cloud/namespaces#temporal-cloud-grpc-endpoint) A Cloud gRPC Endpoint is a Namespace-specific address used to access Temporal Cloud from your code. #### [Temporal Cluster](/temporal-service) The term "Temporal Cluster" is being phased out. Instead, the term [Temporal Service](#temporal-service) is now being used.
#### [Temporal Service](/temporal-service) A Temporal Service is a Temporal Server paired with Persistence and Visibility stores. #### [Temporal Service configuration](/temporal-service/configuration) Temporal Service configuration is the setup and configuration details of your Temporal Service, defined using YAML. #### [Temporal Cron Job](/cron-job) A Temporal Cron Job is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution. #### [Temporal Platform](/temporal#temporal-platform) The Temporal Platform consists of a Temporal Service and Worker Processes. #### [Temporal SDK](/encyclopedia/temporal-sdks) A Temporal SDK is a language-specific library that offers APIs to construct and use a Temporal Client to communicate with a Temporal Service, develop Workflow Definitions, and develop Worker Programs. #### [Temporal Server](/temporal-service/temporal-server) The Temporal Server is a grouping of four horizontally scalable services. #### [Temporal Web UI](/web-ui) The Temporal Web UI provides users with Workflow Execution state and metadata for debugging purposes. #### [Timer](/workflow-execution/timers-delays) Temporal SDKs offer Timer APIs so that Workflow Executions are deterministic in their handling of time values. #### [Update](/sending-messages#sending-updates) An Update is a request to, and a response from, a Workflow Execution. #### [Visibility](/temporal-service/visibility) The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service. #### [Worker](/workers#worker) In day-to-day conversations, the term Worker is used to denote both a Worker Program and a Worker Process. Temporal documentation aims to be explicit and differentiate between them. #### [Worker Entity](/workers#worker-entity) A Worker Entity is the individual Worker within a Worker Process that listens to a specific Task Queue. #### [Worker Process](/workers#worker-process) A Worker Process is responsible for polling a Task Queue, dequeueing a Task, executing your code in response to a Task, and responding to the Temporal Server with the results. #### [Worker Program](/workers#worker-program) A Worker Program is the static code that defines the constraints of the Worker Process, developed using the APIs of a Temporal SDK. #### [Worker Service](/temporal-service/temporal-server#worker-service) The Worker Service runs background processing for the replication queue, system Workflows, and (in versions older than 1.5.0) the Kafka visibility processor. #### [Worker Session](/task-routing#worker-session) A Worker Session is a feature provided by some SDKs that provides a straightforward way to ensure that Activity Tasks are executed with the same Worker without requiring you to manually specify Task Queue names. #### [Workflow](/workflows) In day-to-day conversations, the term "Workflow" frequently denotes either a Workflow Type, a Workflow Definition, or a Workflow Execution. #### [Workflow Definition](/workflow-definition) A Workflow Definition is the code that defines the constraints of a Workflow Execution. #### [Workflow Execution](/workflow-execution) A Temporal Workflow Execution is a durable, scalable, reliable, and reactive function execution. It is the main unit of execution of a Temporal Application.
#### [Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout) A Workflow Execution Timeout is the maximum time that a Workflow Execution can be executing (have an Open status) including retries and any usage of Continue As New. #### [Workflow History Export](/cloud/export) Workflow History export allows users to export Closed Workflow Histories to a user's Cloud Storage Sink. #### [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) A Workflow Id is a customizable, application-level identifier for a Workflow Execution that is unique to an Open Workflow Execution within a Namespace. #### [Workflow Id Conflict Policy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) A Workflow Id Conflict Policy determines how to resolve the conflict when spawning a new Workflow Execution with a particular Workflow Id that is used by an Open Workflow Execution already. #### [Workflow Id Reuse Policy](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) A Workflow Id Reuse Policy determines whether a Workflow Execution is allowed to spawn with a particular Workflow Id, if that Workflow Id has been used with a previous, and now Closed, Workflow Execution. #### [Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout) A Workflow Run Timeout is the maximum amount of time allowed for a single Workflow Run. #### [Workflow Task](/tasks#workflow-task) A Workflow Task is a Task that contains the context needed to make progress with a Workflow Execution. #### [Workflow Task Execution](/tasks#workflow-task-execution) A Workflow Task Execution occurs when a Worker picks up a Workflow Task and uses it to make progress on the execution of a Workflow Definition. #### [Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout) A Workflow Task Timeout is the maximum amount of time that the Temporal Server will wait for a Worker to start processing a Workflow Task after the Task has been pulled from the Task Queue. #### [Workflow Type](/workflow-definition#workflow-type) A Workflow Type is a name that maps to a Workflow Definition. ## Deprecated terms #### tctl (_deprecated_) tctl is a command-line tool that you can use to interact with a Temporal Service. It is superseded by the [Temporal CLI utility](#cli). --- ## Managing Temporal Cloud Access Control Temporal Cloud supports two secure authentication methods for Workers: - **mTLS Certificates** - **API Keys** (configured via the UI when creating a Namespace) Both options help secure communication between Workers and Temporal Cloud. Choosing the right method and managing it properly is key to maintaining security and minimizing downtime. The high-level end-to-end rotation process is: 1. **Generate new credentials**: Create new certificates or API keys in Temporal Cloud before the current ones expire 2. **Support dual credentials**: Update Temporal Cloud to support both old and new credentials 3. **Migrate Workers**: Transition Worker applications from old credentials to new credentials 4. **Validate connectivity**: Confirm all Workers can authenticate, and business processes operate normally with new credentials 5. **Remove old credentials**: Remove old certificates and API keys from your secrets provider after confirming successful migration This approach ensures near-zero-downtime rotation and prevents authentication failures that could impact running Workflows.
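As an illustration of steps 1 and 5 for API keys, the following tcld sketch creates a replacement key and later removes the old one; the key name shown is hypothetical, and exact flags may vary with your tcld version:

```command
# Step 1: create a replacement key before the current one expires
tcld apikey create --name payments-worker-2025q3 --duration 90d

# Steps 2-4: distribute the new key to Workers, then validate connectivity

# Step 5: delete the old key after confirming successful migration
tcld apikey delete --id <old-api-key-id>
```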
For specific guidance to rotate mTLS certificates and API keys, see: - https://docs.temporal.io/cloud/certificates#manage-certificates - https://docs.temporal.io/cloud/api-keys#rotate-an-api-key - https://github.com/temporal-sa/temporal-Worker-cert-rotation For mutual TLS (mTLS) implementations, using Let's Encrypt is not recommended, as it is designed primarily for public-facing services and lacks support for internal certificate requirements. While we are not making a specific product recommendation, there are several valid options for managing certificates. Many organizations choose vendor solutions such as AWS Private CA, Sectigo, Microsoft Certification Authority, or DigiCert for their robust integration and lifecycle features. Alternatively, self-signed certificates are a valid and commonly used approach, even in production environments. If you choose to self-sign, tools like [OpenSSL](https://openssl-library.org/), [CFSSL](https://github.com/cloudflare/cfssl), or [step CLI](https://github.com/smallstep/cli) can help generate and manage certificates effectively. Select the option that aligns best with your infrastructure, security requirements, and operational needs. If you are using multiple certificates signed by the same CA, and some of these certificates are for production environments, there are safeguards you can employ. One convention is to give certificates a common name that matches the Namespace. If you do this when using the same CA for dev and prod, then you can leverage Certificate Filters to prevent access to production environments. This is described in detail under the [authorization section](https://docs.temporal.io/cloud/certificates#control-authorization) of the documentation. ## Best practices: #### 1. Establish clear guidelines on authentication methods: Teams should standardize on either [mTLS certificates](https://docs.temporal.io/cloud/certificates) or [API keys](https://docs.temporal.io/cloud/api-keys) for the following operations: - Connect Temporal clients to Temporal Cloud (e.g. Worker processes) - Automation (e.g. Temporal Cloud [Operations API](https://docs.temporal.io/ops), [Terraform provider](https://docs.temporal.io/cloud/terraform-provider), [Temporal CLI](https://docs.temporal.io/cli/setup-cli)) By default, we recommend that teams use API keys and [service accounts](https://docs.temporal.io/cloud/service-accounts) for both operations, because API keys are easier to manage and rotate for most teams. In addition, you can control account-level and namespace-level roles for service accounts. If your organization requires mutual authentication and stronger cryptographic guarantees, we encourage your teams to use mTLS certificates to authenticate Temporal clients to Temporal Cloud and to use API keys for automation (the Temporal Cloud [Operations API](https://docs.temporal.io/ops) and [Terraform provider](https://docs.temporal.io/cloud/terraform-provider) support only API key authentication). #### 2. Use Certificate Filters to restrict access when using shared CAs (e.g., `dev` vs `prod`): Certificate Filters are an additional way to validate clients using the client certificate presented during authentication. Give certificates a common name that matches the Namespace. This is not a requirement, but if you do this when using the same CA for dev and prod environments, you can leverage Certificate Filters to prevent access to production.
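If you take the self-signed route, the following OpenSSL sketch generates a CA and a client certificate whose Common Name matches a hypothetical `payments-prod` Namespace, which pairs naturally with the Certificate Filters approach described above (names and validity periods are illustrative):

```command
# Generate a CA; upload ca.pem to your Temporal Cloud Namespace
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout ca.key -out ca.pem -subj "/CN=payments-ca"

# Generate a client key and CSR with a CN matching the Namespace
openssl req -newkey rsa:4096 -nodes \
  -keyout client.key -out client.csr -subj "/CN=payments-prod"

# Sign the client certificate with the CA; Workers present client.pem and client.key
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 90 -out client.pem
```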
--- ## Best practices These guides outline foundational principles and best practices for using Temporal Cloud. They provide a **validated, opinionated** framework for teams that do not yet have an enablement plan, as well as for teams that want to evaluate and refine their use of Temporal. ## Overview Without clearly defined Temporal standards, organizations often struggle with inconsistent Workflow implementations, fragmented best practices, and misaligned development approaches. This documentation framework helps developers establish robust Temporal standards by providing: - **Proven foundation principles** that have been validated across diverse use cases - **Standardized implementation patterns** for teams to adopt consistently across projects - **Confidence in alignment** with Temporal's architectural principles and recommended practices By following this guidance, developers can define comprehensive Temporal standards that ensure their workflow orchestration implementations are maintainable, scalable, and aligned with platform best practices from the start. ## Target audience This section is intended for: - Developers responsible for building a Temporal Cloud practice within their organization - Anyone building tutorials, courses, onboarding paths, or documentation - Partners or vendors creating Temporal-related learning materials ## Available guides - **[Managing a Namespace](./managing-namespace.mdx)** Best practices for configuring, managing, and optimizing Temporal Namespaces. - **[Managing Temporal Cloud Access Control](./cloud-access-control.mdx)** Guidelines for implementing proper access control and user management in Temporal Cloud. - **[Security Controls for Temporal Cloud](./security-controls.mdx)** Comprehensive security practices for protecting your Temporal Cloud deployment. - **[Worker Deployment and Performance](./worker.mdx)** Best practices for deploying and optimizing Temporal Workers for performance and reliability. --- ## Managing Actions per Second (APS) Limits in Temporal Cloud If you're running Workflows on Temporal Cloud, you've probably noticed that each Namespace comes with an Actions Per Second (APS) limit. But what exactly does that mean, and why does it matter? In Temporal, an Action is any operation that modifies Workflow state or interacts with the Temporal Service. Your Namespace's APS limit controls how many of these operations can happen per second across all Workflows within that Namespace. When the APS limit is reached, Temporal begins to throttle requests. Depending on the business priority of the Workflow, this may be fine, or it may have significant impact. The difficulty is that APS consumption isn't always intuitive. A single Workflow Execution generates multiple actions from the moment it starts, and use cases that fit nicely within APS limits at small scale can exhaust those limits as they grow. Many customers are surprised to find they're hitting APS constraints well before they expected to based on their Workflow count alone. This guide will help you understand why customers hit APS limits, how to design Workflows that use actions efficiently, and what to do when you're approaching capacity. When design changes aren't enough, Temporal Cloud offers a [Provisioned Capacity Mode](#provisioned-capacity-and-trus) that lets you reserve additional capacity using Temporal Resource Units (TRUs) for spiky or unpredictable workloads.
Whether you're just getting started with Temporal Cloud or optimizing an existing deployment, managing APS effectively is key to building scalable, reliable applications. ## Understanding Actions in Temporal Before we dive into why customers hit APS limits, let's talk about what actions are. ### What Counts as an Action? In Temporal, actions are the fundamental operations that drive your Workflows forward. Here's an overview of what counts, with [the full list in our documentation](/cloud/actions): - Workflows: Starting, completing, resetting. Also starting Child Workflows, as well as Schedules and Timers - Activities: Starting, retrying, Heartbeating - Signals, Updates, and Queries Actions that count toward an APS limit are, with a few exceptions, the same as actions that are billable. The key insight here is that nearly everything that happens in Temporal--state changes, decision points, interactions--is counted as an action. ### The Action Multiplier Effect When you start a single Workflow, you're not performing just one action as it relates to APS: a Workflow isn't a single atomic operation, it's a series of events that Temporal orchestrates. Each Activity is an Action, so a Workflow that schedules many Activities up front creates a burst of Actions as soon as it starts. Additionally, there are often business reasons to start multiple Workflows at the same time. These can all contribute to the multiplier effect. ### The Effect of Rate Limiting In Temporal Cloud, the effect of rate limiting is increased latency, not lost work. Workers [might take longer](/cloud/service-availability#throughput) to complete Workflows. ## Common Reasons Customers Hit APS Limits Now that you understand how actions are defined and how they count toward APS limits, let's look at the patterns that most commonly push customers into APS constraints. ### Bursty Traffic Most businesses don't operate at constant velocity—they have rhythms, cycles, and spikes. These patterns can create APS challenges because Temporal Cloud enforces limits at the per-second level. Common bursty patterns include: - Calendar-driven spikes: Month-end financial close processes, quarterly reporting Workflows, payroll that runs on the 1st and 15th, scheduled batch jobs that kick off at midnight. These create predictable but intense load concentrations. - Event-driven surges: Product launches, marketing campaigns, flash sales, breaking news, or seasonal events like Black Friday. - Recovery scenarios: When a downstream dependency fails and then recovers, you often get a thundering herd effect—hundreds or thousands of Workflows that were waiting all suddenly resume execution simultaneously, creating an artificial spike in APS consumption. - Geographic/business hours concentration: Global applications often see load follow the sun, with peak activity during business hours in each region. If your business concentrates in specific markets, you'll see daily peaks rather than even 24/7 distribution. - Retry storms: When a large number of Workflows are stuck on a failing Activity and the retry delay is very short, the retries can cause a spike in Actions. - Timer storms: When a large number of Workflows all set a Timer for the exact same time, those Timers fire together and the Activities that follow run at once--causing a lot of actions at the same time. These types of processes can result in your Namespace averaging 200 APS over a day, but spiking to 800 APS or more during your peak hour, day, or event.
#### How to Mitigate You can't change the patterns of how customers interact with your systems, but there are some adjustments you can make to your Workflows to make traffic patterns more consistent, especially for use cases where immediate response isn't necessary. These adjustments include: - Implement application-level queuing or rate limiting to smooth out predictable spikes. - For scheduled batch operations, stagger start times rather than triggering everything at once--implement jitter in your high-volume [Schedules](/schedule#spec). - Implement jitter when starting Workflows, such as with [Start Delay](/workflow-execution/timers-delays#delay-workflow-execution). - Accept the added latency from rate limiting where immediate responses aren't required. - Reserve headroom with [Provisioned Capacity](/cloud/capacity-modes#provisioned-capacity). ### Cascading Workflows and Fan-Out Patterns Decomposing complex processes into parent and Child Workflows (or with Nexus) is a common and often appropriate pattern, but the APS costs multiply dramatically with depth and fan-out. Consider an order fulfillment Workflow that spawns Child Workflows for payment processing, inventory management, shipping, and customer notifications. Each Child Workflow goes through its full action lifecycle (start, tasks, activities, completion), and all of those actions count toward the APS limits on your Namespace. This pattern appears frequently in: - Batch processing: A parent Workflow processes a file with 1,000 records, spawning a Child Workflow for each record. Batch processing is also often bursty at the moment a batch begins. - Map-reduce patterns: Data processing Workflows that fan out to process partitions in parallel, then aggregate results. This challenge additionally compounds when you have multiple levels of nesting--parent Workflows that create children, which create their own children. #### How to Mitigate - Evaluate whether Child Workflows are necessary--other options include Activities or Workflows in another Namespace (via Nexus) - When you do use Child Workflows, limit fan-out size--design a Child Workflow to process its work in batches rather than one Child per work item. [This sample application](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/batch/slidingwindow) shows more detail. - Consider flattening deeply nested hierarchies into shallower structures. ### Human-in-the-Loop Processes at Scale Workflows that incorporate human decision-making--approvals, reviews, manual data entry, quality checks--tend to be long-running and interaction-intensive, which creates sustained APS load. These Workflows can involve Queries from UIs to display current state and pending tasks. At small scale, this is manageable. But when you're running thousands of them at the same time--like a content moderation queue with pending reviews, or a loan approval system processing applications, or a support ticket system managing thousands of open cases--the cumulative APS load from all of those long-running Workflows adds up. #### How to Mitigate - Avoid polling patterns where UIs constantly query Workflow state. Instead, push state changes to a database that UIs can read. ### Real-Time SLAs and Deadline Management Businesses with strict service level agreements often implement active monitoring and escalation in their Workflows. This is generally accomplished by setting Timers every [x] minutes to determine if an SLA deadline is approaching, allowing the Workflow to trigger escalations or alerts. Each of these Timers/monitoring actions affects APS.
When you have thousands of in-flight Workflows all actively monitoring their own SLAs, the background load becomes significant. You're consuming substantial APS capacity even when Workflows aren't doing their primary work. #### How to Mitigate - Use longer monitoring intervals where possible. For example, check SLAs every 30 minutes rather than every 1 minute. - Where possible, consolidate Timers. Rather than 10 Timers that check 10 tasks, have 1 Timer and then check those 10 tasks. - Where possible, have an external system signal your Workflow rather than using short-lived Timers to poll. - For retries, use exponential backoff with reasonable initial intervals. ## Additional Design Patterns There are some design patterns that can lead to high APS that are consistent across many different types of business use cases. ### Many Small Activities Consider two approaches to processing 1,000 records: - Approach A: Create a Workflow that spawns 1,000 separate activities, one per record. - Approach B: Create a Workflow that spawns 10 activities, each processing 100 records in a batch. Approach B clearly results in far fewer Actions, and therefore lower APS. This is a simple example, but this pattern shows up everywhere: processing individual transactions versus batches, sending individual notifications versus bulk operations, or making separate API calls versus batch endpoints. Each separate Activity adds Action overhead. #### How to Mitigate - Consider if you can combine multiple external calls within a single Activity. - If processing a large amount of data, process it in chunks. - See [How Many Activities should I use in my Temporal Workflow?](https://temporal.io/blog/how-many-activities-should-i-use-in-my-temporal-workflow) for more information. ### Multiple Use Cases in One Namespace Often when starting with Temporal, the first use case is implemented in a single Namespace, generally one per logical environment. When the second Temporal use case is implemented, it runs in the same Namespace, and the same happens for the third, fourth, and so on. An APS limit is set per Namespace, so multiple use cases with multiple traffic patterns in the same Namespace can exhaust this limit quickly. #### How to Mitigate Plan for a set of Namespaces (one per environment) per use case. See [Managing a Namespace](/best-practices/managing-namespace) for more details. ## Provisioned Capacity and TRUs The strategies above help you design Workflows that use actions efficiently. But sometimes you need more capacity than the on-demand model provides, especially for spiky or unpredictable workloads. Temporal Cloud offers two [Capacity Modes](/cloud/capacity-modes): - **On-Demand mode** (default): Your Namespace automatically scales based on your trailing 7-day usage. This works well for steady, predictable workloads. - **Provisioned mode**: You reserve capacity by adding Temporal Resource Units (TRUs), giving you guaranteed headroom for traffic spikes. See [Capacity Modes](/cloud/capacity-modes) for complete details on TRUs, available increments, and how to manage capacity via UI, CLI, or API.
### Choosing the Right Approach Use Provisioned capacity when the on-demand model can't respond quickly enough: | Scenario | Pattern | Recommendation | |----------|---------|----------------| | **Planned spikes** | Promotions, holiday traffic, product launches | Pre-provision TRUs before the event starts | | **Unplanned spikes** | Sudden traffic surges, viral events | React instantly via UI/CLI/API when you see throttling | | **Load testing** | Validating new services at scale | Provision TRUs for the test, deprovision after | | **Batch jobs** | Scheduled high-throughput jobs | Automate TRU scaling via API around job schedules | | **Migrations** | Onboarding a new workload faster than on-demand adjusts | Bridge with TRUs for approximately 7 days while the on-demand envelope catches up | :::note When switching back to on-demand mode, your APS limit resets to the running average from the last 7 days. Plan for this if your workload is sensitive to the transition. ::: ### Cost Optimization Tips See [Capacity Modes Pricing](/cloud/pricing#capacity-modes-pricing) for billing details. To minimize costs: - Provision only when you need extra capacity - Deprovision promptly after spikes end - For predictable patterns, automate scaling to minimize time in provisioned mode ### Automation Best Practices Since you understand your workload patterns better than any auto-scaling system, consider building your own TRU automation: - **Use the [Cloud Ops API](/ops), [Terraform Provider](/cloud/terraform-provider), or [tcld CLI](/cloud/tcld)** to programmatically scale capacity based on your application's signals - **Set utilization thresholds**: For example, scale up when hitting 70-80% of your limit, scale down after sustained low usage - **Schedule capacity changes**: Use [Temporal Schedules](/schedule) or Workflows to increase TRUs before known events - **React to leading indicators**: If your application has upstream signals (incoming order queue depth, marketing campaign start), use those to trigger capacity changes proactively ## Knowing if You're Hitting APS Limits In addition to understanding the patterns that can affect APS limits on a Temporal Namespace, it's also important to know if you're approaching (or exceeding) these limits. Temporal Cloud provides several metrics that, if tracked, will tell you if you're being rate limited due to APS. See the documentation on [detecting resource exhaustion](/cloud/service-health#rps-aps-rate-limits) for an explanation of those metrics as well as a sample Grafana dashboard that shows how they could be viewed. ### Monitoring for TRU Decisions If you're considering Provisioned capacity, set up monitoring to understand your usage patterns: - **Use [OpenMetrics](/cloud/metrics/openmetrics)**: For real-time visibility into APS consumption, integrate Temporal Cloud metrics with your observability stack - **Track APS usage vs. 
limits**: Monitor `temporal_cloud_v0_resource_exhausted_errors` to detect throttling events - **Set alerts at 70-80% utilization**: This gives you time to provision TRUs before hitting limits - **Analyze historical patterns**: Understanding your traffic patterns helps you decide between reactive TRU provisioning and proactive automation ## Key Takeaways Let's recap the main reasons customers hit APS limits and how to address them: | Reason for Hitting APS Limits | How to Address It | |-------------------------------|-------------------| | Bursty Traffic | Implement application-level queuing or rate limiting to smooth spikes, and stagger start times for scheduled batch operations. | | Cascading Workflows and Fan-Out Patterns | Evaluate if Child Workflows are necessary (consider activities or another Namespace), limit fan-out size by processing work in batches within a Child Workflow, consider flattening deeply nested hierarchies. | | Human-in-the-Loop Processes at Scale | Design long-running Workflows to minimize sustained APS load from interaction (by avoiding polling where UIs constantly Query state and using Signals only for key human inputs). | | Many Small Activities | Consider if you can combine multiple external calls within a single Activity. If processing a large amount of data, process it in chunks. | | Multiple Use Cases in One Namespace | Plan for a set of Namespaces (one per environment) per use case. | | Planned traffic spikes | Pre-provision TRUs before the event, then deprovision after. | | Unpredictable spikes requiring instant response | Switch to Provisioned mode for self-service capacity scaling via UI, CLI, or API. | | Load testing at scale | Provision TRUs for the test duration, deprovision when complete. | | New workload onboarding | Bridge with TRUs while the on-demand envelope adjusts (approximately 7 days). | ## General guidance When designing Temporal Workflows with an eye toward APS limits, ask yourself the following questions: - How many actions will a single execution of this Workflow consume? - How many Workflows will typically be running at the same time? - What happens to APS consumption when the number of Actions * number of active Workflows scales to 100x current volume? - Are there natural opportunities to combine operations: combine activities, or process chunks of data together? - Am I polling when I could be using Signals? - Does this Workflow need to run continuously, or can it be event-driven? A few hours spent optimizing Workflow design can save you from capacity crunches, emergency limit increases, and potentially significant cost increases down the road. --- ## Namespace Best Practices :::info Applies to both open source and Temporal Cloud This page covers namespace best practices that apply to **both** open source Temporal and Temporal Cloud. Platform-specific guidance is clearly labeled throughout. For reference documentation, see: - [Namespace concepts](/namespaces) - [Managing Namespaces (open source)](/self-hosted-guide/namespaces) - [Namespaces (Temporal Cloud)](/cloud/namespaces) ::: A [Namespace](/namespaces) is a unit of isolation within the Temporal Platform. It ensures that Workflow Executions, Task Queues, and resources are logically separated, preventing conflicts and enabling safe multi-tenant usage. ## Naming Conventions ### Use lowercase and hyphens Use lowercase letters and hyphens (`-`) as separators in Namespace names.
- **Temporal Cloud**: Namespace names are case-insensitive, so `MyNamespace` and `mynamespace` refer to the same Namespace.
- **Open source**: Namespace names are case-sensitive, so `MyNamespace` and `mynamespace` are different Namespaces.

To avoid confusion across environments, always use lowercase.

**Example**: `payment-checkout-prd`

### Follow a consistent naming pattern

Use a pattern like `<use-case>-<domain>-<environment>` to name Namespaces:

| Component | Max Length | Examples |
|-----------|------------|----------|
| Use case | 10 chars | `payments`, `fulfill`, `orders` |
| Domain | 10 chars | `checkout`, `notify`, `inventory` |
| Environment | 3 chars | `dev`, `stg`, `prd` |

**Examples**: `payments-checkout-dev`, `fulfill-notify-prd`, `orders-inventory-stg`

**Why this pattern?**

- Simple and easy to understand
- Clearly separates environments
- Groups related services under domains
- Allows platform teams to implement chargeback to application teams
- Namespace-level limits are isolated between different services and environments

:::tip Temporal Cloud
Cloud Namespace names are limited to [39 characters](/cloud/namespaces#temporal-cloud-namespace-name). If you need to include a region, use short codes (e.g., `aps1`, `use1`).
:::

## Organizational Patterns

### Pattern 1: Namespace per use case and environment

For simple configurations without multiple services or team boundaries.

**Naming convention**: `<use-case>_<environment>`

**Example**: `payments_prod`, `orders_dev`

### Pattern 2: Namespace per use case, service, and environment

For multiple services that are part of the same use case and communicate externally to Temporal via API (HTTP/gRPC).

**Naming convention**: `<use-case>_<service>_<environment>`

**Example**: `payments_gateway_prod`, `payments_processor_prod`

### Pattern 3: Namespace per use case, domain, and environment

When multiple services need to communicate with each other, use [Temporal Nexus](/nexus) to connect Workflows across Namespace boundaries. This provides better security, fault isolation, and modularity than sharing a Namespace.

**Naming convention**: `<use-case>_<domain>_<environment>`

**Example**: `payments_checkout_prod`, `payments_refunds_prod`

For systems without Nexus, services can communicate via [Signals](/sending-messages#sending-signals) or [Child Workflows](/child-workflows) within the same Namespace.

:::note Workflow ID uniqueness
When multiple teams share a Namespace, prefix each Workflow ID with a service-specific string to ensure uniqueness. Task Queue names must also be unique within the Namespace.
:::

## Production Safeguards

### Use an Authorizer (open source only) {#authorizer}

Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service to set restrictions on who can create, update, or deprecate Namespaces. If an Authorizer is not set, Temporal uses the `nopAuthority` authorizer, which unconditionally allows all API calls.

On Temporal Cloud, [role-based access controls](/cloud/users#namespace-level-permissions) provide Namespace-level authorization without custom configuration.

### Enable deletion protection (Temporal Cloud only) {#deletion-protection}

[Enable deletion protection](/cloud/namespaces#delete-protection) for production Namespaces to prevent accidental deletion.

### Enable High Availability (Temporal Cloud only) {#high-availability}

For business-critical use cases with strict uptime requirements, enable [High Availability features](/cloud/high-availability) for a [99.99% contractual SLA](/cloud/high-availability#high-availability-features).
### Use Infrastructure as Code (Temporal Cloud only) {#terraform}

Use the [Temporal Cloud Terraform provider](/cloud/terraform-provider) to manage Namespaces. If Terraform isn't suitable, scripting against the [Cloud Ops API](/ops) or [tcld](/cloud/tcld) is a good alternative. This provides:

- Documentation of each Namespace's purpose and owners
- Prevention of infrastructure drift
- Version-controlled configuration changes

Use `prevent_destroy = true` in your Terraform configuration to prevent accidental Namespace deletion via Terraform. This is separate from [Temporal Cloud deletion protection](/cloud/namespaces#delete-protection), which prevents deletion through any interface.

**Reference**: [Example Terraform configuration](https://github.com/kawofong/temporal-terraform)

## Tagging (Temporal Cloud only) {#tagging}

[Tags](/cloud/namespaces#tag-a-namespace) are key-value metadata pairs that help organize, track, and manage Namespaces. Tags complement your naming convention by adding metadata that doesn't fit in the Namespace name. While the name captures use case, domain, and environment, tags can capture additional dimensions like team ownership, data sensitivity, or business criticality.

### Recommended tag categories

| Tag Key | Purpose | Examples |
|---------|---------|----------|
| `environment` | Deployment stage | `dev`, `staging`, `production` |
| `team` | Owning team | `platform`, `payments`, `identity` |
| `division` | Business unit | `engineering`, `finance`, `ops` |
| `criticality` | Business importance | `high`, `medium`, `low` |
| `data-sensitivity` | Data classification | `pii`, `pci`, `public` |
| `latency-sensitivity` | Performance tier | `realtime`, `batch`, `async` |

For tag structure, limits, and management instructions, see [How to tag a Namespace](/cloud/namespaces#tag-a-namespace).

## SDK Client Configuration

Set the Namespace in your SDK Client to isolate your Workflow Executions. If you do not set a Namespace, all Workflow Executions started using the Client will be associated with the `default` Namespace. You must register a Namespace before setting it in your Client.

For configuration details, see:

- [Namespace concepts](/namespaces)
- [Namespaces (Temporal Cloud)](/cloud/namespaces#access-namespaces)

---

## Worker deployment and performance

This document outlines best practices for deploying and optimizing Workers to ensure high performance, reliability, and scalability. It covers deployment strategies, scaling techniques, tuning recommendations, and monitoring approaches to help you get the most out of your Temporal Workers.

We also provide a reference application, the Order Management System (OMS), that demonstrates the deployment best practices in action. You can find the OMS codebase on [GitHub](https://github.com/temporalio/reference-app-orders-go/tree/main/docs).

## Quick checklist

Designing a comprehensive Worker deployment strategy to optimize production performance involves many considerations. We provide a quick checklist to help you get started. Before deploying Workers to production, ensure you address the following. Follow the links to the relevant sections for more details.

- **[Configure each Worker appropriately](#actively-tune-worker-options-instead-of-relying-on-defaults)**: Actively tune Worker options based on your code, language runtime limits, and system resource constraints. Don't rely on the defaults, which are designed for ease of use in development and testing, not for optimal production performance.
- **[Deploy enough Workers](#interpret-metrics-as-a-whole)**: Monitor performance metrics and scale Workers to meet your workload requirements.
- **[Separate Task Queues logically](#separate-task-queues-logically)**: Size and split work across Task types (Activities and Workflows) and Task Queues based on workload characteristics.
- **[Version Workers for safe deployments](#use-worker-versioning-to-safely-deploy-new-workflow-code)**: Ensure you can deploy new Workflow code without breaking running Executions.
- **Run benchmarks**: Test your configuration under realistic load to confirm limits and settings are appropriate for your environment.

## Deployment and lifecycle management

Well-designed Worker deployment ensures resilience, observability, and maintainability. A Worker should be treated as a long-running service that can be deployed, upgraded, and scaled in a controlled way.

### Package and configure Workers for flexibility

Workers should be artifacts produced by a CI/CD pipeline. Inject all required parameters for connecting to Temporal Cloud or a self-hosted Temporal Service at runtime via environment variables, configuration files, or command-line parameters. This allows for more granular control, easier testing and upgrades, scalability, and isolation of Workers.

In the order management reference app, Workers are packaged as Docker images with configuration provided via environment variables and mounted configuration files. The following Dockerfile uses a multi-stage build to create a minimal, production-ready Worker image:

{/* SNIPSTART oms-dockerfile-worker */}
[Dockerfile](https://github.com/temporalio/reference-app-orders-go/blob/main/Dockerfile)
```Dockerfile
FROM golang:1.24.1 AS oms-builder

WORKDIR /usr/src/oms

COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go mod download

COPY app ./app
COPY cmd ./cmd

RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 go build -v -o /usr/local/bin/oms ./cmd/oms

FROM busybox AS oms-worker

COPY --from=oms-builder /usr/local/bin/oms /usr/local/bin/oms

ENTRYPOINT ["oms", "worker"]
```
{/* SNIPEND oms-dockerfile-worker */}

This Dockerfile uses a multi-stage build pattern with two stages:

1. `oms-builder` stage: compiles the Worker binary.
   1. Copies dependency files and downloads dependencies using BuildKit cache mounts to speed up subsequent builds.
   2. Copies the application code and builds a statically linked binary that doesn't require external libraries at runtime.
2. `oms-worker` stage: creates a minimal final image.
   1. Copies only the compiled binary from the `oms-builder` stage.
   2. Sets the entrypoint to run the Worker process.

The entrypoint `oms worker` starts the Worker process, which reads configuration from environment variables at runtime. For example, the [Billing Worker deployment in Kubernetes](https://github.com/temporalio/reference-app-orders-go/blob/main/deployments/k8s/billing-worker-deployment.yaml) uses environment variables to configure the Worker:

{/* SNIPSTART oms-billing-worker-deployment {"selectedLines": ["20-35"]} */}
[deployments/k8s/billing-worker-deployment.yaml](https://github.com/temporalio/reference-app-orders-go/blob/main/deployments/k8s/billing-worker-deployment.yaml)
```yaml
---
# ...
spec:
  containers:
    - args:
        - -k
        - supersecretkey
        - -s
        - billing
      env:
        - name: FRAUD_API_URL
          value: http://billing-api:8084
        - name: TEMPORAL_ADDRESS
          value: temporal-frontend.temporal:7233
      image: ghcr.io/temporalio/reference-app-orders-go-worker:latest
      name: billing-worker
      imagePullPolicy: Always
  enableServiceLinks: false
```
{/* SNIPEND */}

### Separate Task Queues logically

Use separate Task Queues for distinct workloads. This isolation allows you to control rate limiting, prioritize certain workloads, and prevent one workload from starving another. For each Task Queue, ensure you configure at least two Workers to poll the Task Queue.

In the order management reference app, each microservice has its own Task Queue. For example, the Billing Worker polls the `billing` Task Queue, while the Order Worker polls the `order` Task Queue. This separation allows each service to scale independently based on its workload.

The following code snippet shows how the Billing Worker is set up to poll its Task Queue. The default value for `TaskQueue` is a constant defined in the `api.go` configuration file and is set to `billing`. Since Task Queues are created dynamically when first used, a mismatch between the Client and Worker Task Queue names does not result in an error. Instead, it creates two different Task Queues, and the Worker never receives Tasks from the Temporal Service because it's polling the wrong queue. Define the Task Queue name as a constant that both the Client and Worker reference to avoid this issue.

{/* SNIPSTART oms-billing-worker-go {"selectedLines": ["12-23"]} */}
[app/billing/worker.go](https://github.com/temporalio/reference-app-orders-go/blob/main/app/billing/worker.go)
```go
// ...

// RunWorker runs a Workflow and Activity worker for the Billing system.
func RunWorker(ctx context.Context, config config.AppConfig, client client.Client) error {
	w := worker.New(client, TaskQueue, worker.Options{
		MaxConcurrentWorkflowTaskPollers: 8,
		MaxConcurrentActivityTaskPollers: 8,
	})

	w.RegisterWorkflow(Charge)
	w.RegisterActivity(&Activities{FraudCheckURL: config.FraudURL})

	return w.Run(temporalutil.WorkerInterruptFromContext(ctx))
}
```
{/* SNIPEND */}

### Use Worker Versioning to safely deploy new Workflow code

Use Worker Versioning to deploy new Workflow code without breaking running Executions. Worker Versioning lets you map each Workflow Execution to a specific Worker Deployment Version identified by a build ID, which guarantees that pinned Workflows always run on the same Worker version where they started. To learn more about versioning Workflows, see the [Workflow Versioning](/production-deployment/worker-deployments/worker-versioning.mdx) guide or take our [Worker Versioning course](https://learn.temporal.io/courses/worker_versioning/).

:::tip
In addition to Worker Versioning, you can also use [Patching](/patching) to introduce changes to your Workflow code without breaking running Executions. Patching reduces complexity on the infrastructure side compared to Worker Versioning, but it introduces some complexity on the Workflow code side. Choose the approach that best fits your needs.
:::

### Manage Event History growth

If a Worker goes offline and another Worker picks up the same Workflow Execution, the new Worker must replay the existing Event History to resume the Workflow Execution. If the Event History is too large or has too many Events, replay affects the performance of the new Worker and may even cause timeout errors well before the hard limit of 51,200 Events is reached.
We recommend not exceeding a few thousand Events in a single Workflow Execution. The best way to handle Event History growth is to use the [Continue-As-New](/workflow-execution/continue-as-new) mechanism to continue under a new Workflow Execution with a new Event History, repeating this process as you approach the limits again. All Temporal SDKs provide functions to suggest when to use [Continue-As-New](/workflow-execution/continue-as-new). For example, the Python SDK has the [`is_continue_as_new_suggested()`](https://python.temporal.io/temporalio.workflow.Info.html#is_continue_as_new_suggested) function that returns a `bool` indicating whether to use Continue-As-New. In addition to the number of Events, monitor the size of the Event History. Input parameters and output values of both Workflows and Activities are stored in the Event History. Storing large amounts of data can lead to performance problems, so the Temporal Cluster limits both the size of individual payloads and the total Event History size. A Workflow Execution may be terminated if any single payload exceeds 2 MB or if the entire Event History exceeds 50 MB. To avoid hitting these limits, avoid passing large amounts of data into and out of Workflows and Activities. A common way to reduce payload and Event History size is the [Claim Check](https://dataengineering.wiki/Concepts/Software+Engineering/Claim+Check+Pattern) pattern, widely used with messaging systems such as Apache Kafka. Instead of passing large data into your function, store that data external to Temporal in a database or file system. Pass an identifier for the data, such as a primary key or path, into the function and use an Activity to retrieve it as needed. If your Activity produces large output, use a similar approach: write the data to an external system and return an identifier that can be used to retrieve it later. ## Scaling, monitoring, and tuning Scaling and tuning are critical to Worker performance and cost efficiency. The goal is to balance concurrency, throughput, and resource utilization while maintaining low Task latency. ### Actively tune Worker options instead of relying on defaults Default Worker settings are designed to work across a wide range of use cases primarily for ease in development and testing. In production environments, your Workflow complexity, Activity duration, payload sizes, and infrastructure constraints all influence optimal Worker configuration. Actively tuning your Workers ensures they perform well under your specific workload conditions. To get started, focus on these key Worker options: - **Task slots**: Limits how many Tasks execute concurrently. Set based on your Worker's CPU, memory, and the resource demands of your code. You can choose different types of Slot Suppliers or implement a custom Slot Supplier to control how Task slots are assigned to different Task types. Refer to [Slot Suppliers](/develop/worker-performance#slot-suppliers) for more details. - **Sticky cache size**: Controls the size of the sticky cache for Workflow Executions. Larger caches reduce replay overhead but consume more memory. Refer to [Workflow Cache Tuning](/develop/worker-performance#workflow-cache-tuning) for more details. - **Poller counts**: Controls the number of pollers for Tasks. We recommend you use the Poller Autoscaling feature to automatically adjust the number of pollers based on your workload. Refer to [Configuring Poller Options](/develop/worker-performance#configuring-poller-options) for more details. 
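To make these options concrete, here is a minimal Go sketch of a Worker with explicitly tuned slots, pollers, and sticky cache. The address, Task Queue name, and every numeric value are illustrative placeholders rather than recommendations; derive your own values from benchmarks and the metrics discussed below.

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Connection parameters are illustrative; in production, inject them
	// via environment variables or configuration files.
	c, err := client.Dial(client.Options{HostPort: "localhost:7233"})
	if err != nil {
		log.Fatalln("Unable to create Temporal Client:", err)
	}
	defer c.Close()

	// Sticky cache: process-wide in the Go SDK; set it before starting any
	// Workers. Larger caches reduce replay overhead but use more memory.
	worker.SetStickyWorkflowCacheSize(2048)

	w := worker.New(c, "your-task-queue", worker.Options{
		// Task slots: cap concurrent executions based on the CPU and
		// memory demands of your Activities and Workflows.
		MaxConcurrentActivityExecutionSize:     200,
		MaxConcurrentWorkflowTaskExecutionSize: 100,
		// Poller counts: raise from the defaults if Schedule-to-Start
		// latency is high while CPU and memory remain low.
		MaxConcurrentActivityTaskPollers: 8,
		MaxConcurrentWorkflowTaskPollers: 8,
	})

	// Register your Workflows and Activities here before running.

	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("Worker exited:", err)
	}
}
```

Note that the sticky cache size is set once per process in the Go SDK, while slot and poller counts are configured per Worker; other SDKs expose equivalent options on their Worker configuration.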
Use the metrics listed in [Interpret metrics as a whole](#interpret-metrics-as-a-whole) to guide your tuning decisions.

### Interpret metrics as a whole

No single metric tells the full story. The following are some of the most useful Worker-related metrics to monitor. We recommend having all of the metrics listed below on your Worker monitoring dashboard. When you observe anomalies, correlate across multiple metrics to identify root causes.

- Worker CPU and memory utilization
- `workflow_task_schedule_to_start_latency` and `activity_task_schedule_to_start_latency`
- `worker_task_slots_available`
- `temporal_long_request_failure`, `temporal_request_failure`, `temporal_long_request_latency`, and `temporal_request_latency`

For example, Schedule-to-Start latency measures how long a Task waits in the queue before a Worker starts it. High latency means your Workers or pollers can't keep up with incoming Tasks, but the root cause depends on your resource metrics:

- High Schedule-to-Start latency and high CPU/memory: Workers are saturated. Scale up your Workers or add more Workers. It's also possible your Workers are blocked on Activities. Refer to [Troubleshooting - Depletion of Activity Task Slots](../troubleshooting/performance-bottlenecks.mdx#depletion-of-temporal_worker_task_slots_available-for-activityworker) for guidance.
- High Schedule-to-Start latency and low CPU/memory: Workers are underutilized. Increase the number of pollers, executor slots, or both. If this is accompanied by high `temporal_long_request_latency` or `temporal_long_request_failure`, your Workers are struggling to reach the Temporal Service. Refer to [Troubleshooting - Long Request Latency](../troubleshooting/performance-bottlenecks.mdx#high-temporal_long_request_failure) for guidance.
- Low Schedule-to-Start latency and low CPU/memory: Depending on your workload, this could be normal. If you are consistently seeing low memory and CPU usage, you may be over-provisioning your Workers and can consider scaling down.

Refer to [Intro to Worker Tuning](https://temporal.io/blog/an-introduction-to-worker-tuning) for more details and examples on how to interpret Worker metrics.

### Optimize Worker cache

Workers keep a cache of Workflow Executions to improve performance by reducing replay overhead. However, larger caches consume more memory. The `temporal_sticky_cache_size` metric tracks the size of the cache. If you observe high memory usage for your Workers alongside a high `temporal_sticky_cache_size`, you can be reasonably sure the cache is contributing to memory pressure. A high `temporal_sticky_cache_size` by itself isn't necessarily an issue, but if your Workers are memory-bound, consider reducing the cache size to allow more concurrent executions. We recommend you experiment with different cache sizes in a staging environment to find the optimal setting for your Workflows. Refer to [Troubleshooting - Caching](../troubleshooting/performance-bottlenecks.mdx#caching) for more details on how to interpret the different cache-related metrics.

### Manage scale-down safely

Before shutting down a Worker, verify that it does not have too many active Tasks. This is especially relevant if your Workers are handling long-running, expensive Activities. If `worker_task_slots_available` is at or near zero, the Worker is busy with active Tasks, and shutting it down could trigger expensive retries or timeouts for long-running Activities.
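One mitigation, described in more detail below, is a graceful shutdown window that lets an exiting Worker finish its in-flight Tasks. A minimal Go sketch, assuming an existing Client `c` created elsewhere in your program (the Task Queue name and the one-minute timeout are illustrative):

```go
import (
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

// newDrainFriendlyWorker is a hypothetical helper showing the setting.
func newDrainFriendlyWorker(c client.Client) worker.Worker {
	return worker.New(c, "your-task-queue", worker.Options{
		// On shutdown, the Worker stops polling for new Tasks and waits up
		// to this long for in-flight Tasks to finish before exiting.
		// Size this to your longest typical Activity (value illustrative).
		WorkerStopTimeout: time.Minute,
	})
}
```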
Use [Graceful Shutdowns](/encyclopedia/workers/worker-shutdown#graceful-shutdown) to allow the Worker to complete its current Tasks before shutting down. All SDKs provide a way to configure Graceful Shutdowns. For example, the Go SDK has the [`WorkerStopTimeout` option](https://pkg.go.dev/go.temporal.io/sdk@v1.38.0/internal#WorkerOptions) that lets you configure how long the Worker has to complete its current Tasks before shutting down. --- ## Temporal CLI activity command reference {/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */} ## complete Complete an Activity, marking it as successfully finished. Specify the Activity ID and include a JSON result for the returned value: ``` temporal activity complete \ --activity-id YourActivityId \ --workflow-id YourWorkflowId \ --result '{"YourResultKey": "YourResultVal"}' ``` Use the following options to change the behavior of this command. **Flags:** **--activity-id** _string_ Activity ID to complete. Required. **--result** _string_ Result `JSON` to return. Required. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. 
(default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## fail Fail an Activity, marking it as having encountered an error. Specify the Activity and Workflow IDs: ``` temporal activity fail \ --activity-id YourActivityId \ --workflow-id YourWorkflowId ``` Use the following options to change the behavior of this command. **Flags:** **--activity-id** _string_ Activity ID to fail. Required. **--detail** _string_ Reason for failing the Activity (JSON). **--reason** _string_ Reason for failing the Activity. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. 
May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## pause Pause an Activity. If the Activity is not currently running (e.g. because it previously failed), it will not be run again until it is unpaused. However, if the Activity is currently running, it will run until the next time it fails, completes, or times out, at which point the pause will kick in. If the Activity is on its last retry attempt and fails, the failure will be returned to the caller, just as if the Activity had not been paused. Activities should be specified either by their Activity ID or Activity Type. For example, specify the Activity and Workflow IDs like this: ``` temporal activity pause \ --activity-id YourActivityId \ --workflow-id YourWorkflowId ``` To later unpause the activity, see [unpause](#unpause). You may also want to [reset](#reset) the activity to unpause it while also starting it from the beginning. Use the following options to change the behavior of this command. **Flags:** **--activity-id**, **-a** _string_ The Activity ID to pause. Either `activity-id` or `activity-type` must be provided, but not both. **--activity-type** _string_ All activities of the Activity Type will be paused. Either `activity-id` or `activity-type` must be provided, but not both. Note: Pausing Activity by Type is an experimental feature and may change in the future. **--identity** _string_ The identity of the user or client submitting this request. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. 
(default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## reset Reset an activity. 
This restarts the activity as if it were first being scheduled. That is, it will reset both the number of attempts and the activity timeout, as well as, optionally, the [heartbeat details](#reset-heartbeats).

If the activity may be executing (i.e. it has not yet timed out), the reset will take effect the next time it fails, heartbeats, or times out. If it is waiting for a retry (i.e. it has failed or timed out), the reset will apply immediately.

If the activity is already paused, it will be unpaused by default; provide the `--keep-paused` flag to keep it paused.

Activities can be specified by their Activity ID or Activity Type.

### Resetting activities that heartbeat {#reset-heartbeats}

Activities that heartbeat will receive a [Canceled failure](/references/failures#cancelled-failure) the next time they heartbeat after a reset. If, in your Activity, you need to do any cleanup when an Activity is reset, handle this error and then re-throw it when you've cleaned up. If the `--reset-heartbeats` flag is set, the heartbeat details will also be cleared.

Specify the Activity ID or Type, along with the Workflow ID:

```
temporal activity reset \
  --activity-id YourActivityId \
  --workflow-id YourWorkflowId \
  --keep-paused \
  --reset-heartbeats
```

Either `activity-id`, `activity-type`, or `--match-all` must be specified.

Activities can be reset in bulk with a visibility query list filter. For example, if you want to reset activities of type Foo:

```
temporal activity reset \
  --query 'TemporalResetInfo="property:activityType=Foo"'
```

Use the following options to change the behavior of this command.

**Flags:**

**--activity-id**, **-a** _string_ The Activity ID to reset. Mutually exclusive with `--query`, `--match-all`, and `--activity-type`. Requires `--workflow-id` to be specified.

**--activity-type** _string_ Activities of this Type will be reset. Mutually exclusive with `--match-all` and `--activity-id`.

**--jitter** _duration_ The activity will reset at a random time within the specified duration. Can only be used with --query.

**--keep-paused** _bool_ If the activity was paused, it will stay paused.

**--match-all** _bool_ Every activity should be reset. Mutually exclusive with `--activity-id` and `--activity-type`.

**--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query.

**--reason** _string_ Reason for batch operation. Only use with --query. Defaults to user name.

**--reset-attempts** _bool_ Reset the activity attempts.

**--reset-heartbeats** _bool_ Reset the Activity's heartbeats. Only works with --reset-attempts.

**--restore-original-options** _bool_ Restore the original options of the activity.

**--rps** _float_ Limit batch's requests per second. Only allowed if query is present.

**--run-id**, **-r** _string_ Run ID. Only use with --workflow-id. Cannot use with --query.

**--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query.

**--yes**, **-y** _bool_ Don't prompt to confirm the operation. Only allowed when --query is present.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")

**--api-key** _string_ API key for request.

**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.

**--client-connect-timeout** _duration_ The client connection timeout.
0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## unpause Re-schedule a previously-paused Activity for execution. If the Activity is not running and is past its retry timeout, it will be scheduled immediately. Otherwise, it will be scheduled after its retry timeout expires. 
Use `--reset-attempts` to reset the number of previous run attempts to zero. For example, if an Activity is near the maximum number of attempts N specified in its retry policy, `--reset-attempts` will allow the Activity to be retried another N times after unpausing.

Use `--reset-heartbeats` to reset the Activity's heartbeats.

Activities can be specified by their Activity ID or Activity Type. One of those parameters must be provided.

Specify the Activity ID or Type, along with the Workflow ID:

```
temporal activity unpause \
  --activity-id YourActivityId \
  --workflow-id YourWorkflowId \
  --reset-attempts \
  --reset-heartbeats
```

Activities can be unpaused in bulk via a visibility Query list filter. For example, to unpause activities of type Foo that you previously paused:

```
temporal activity unpause \
  --query 'TemporalPauseInfo="property:activityType=Foo"'
```

Use the following options to change the behavior of this command.

**Flags:**

**--activity-id**, **-a** _string_ The Activity ID to unpause. Mutually exclusive with `--query`, `--match-all`, and `--activity-type`. Requires `--workflow-id` to be specified.

**--activity-type** _string_ Activities of this Type will be unpaused. Can only be used without --match-all. Either `activity-id` or `activity-type` must be provided, but not both.

**--jitter** _duration_ The activity will start at a random time within the specified duration. Can only be used with --query.

**--match-all** _bool_ Every paused activity should be unpaused. This flag is ignored if activity-type is provided.

**--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query.

**--reason** _string_ Reason for batch operation. Only use with --query. Defaults to user name.

**--reset-attempts** _bool_ Reset the activity attempts.

**--reset-heartbeats** _bool_ Reset the Activity's heartbeats. Only works with --reset-attempts.

**--rps** _float_ Limit batch's requests per second. Only allowed if query is present.

**--run-id**, **-r** _string_ Run ID. Only use with --workflow-id. Cannot use with --query.

**--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query.

**--yes**, **-y** _bool_ Don't prompt to confirm the operation. Only allowed when --query is present.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")

**--api-key** _string_ API key for request.

**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.

**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.

**--codec-auth** _string_ Authorization header for Codec Server requests.

**--codec-endpoint** _string_ Remote Codec Server endpoint.

**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.

**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")

**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.

**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows.

:::note
Option is experimental.
:::

**--disable-config-env** _bool_ If set, disables loading environment config from environment variables.

:::note
Option is experimental.
:::

**--disable-config-file** _bool_ If set, disables loading environment config from config file.

:::note
Option is experimental.
:::

**--env** _string_ Active environment name (`ENV`). (default "default")

**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.

**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.

**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".

**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")

**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")

**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")

**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.

**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")

**--profile** _string_ Profile to use for config file.

:::note
Option is experimental.
:::

**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")

**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.

**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.

**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.

**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.

**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.

**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.

**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.

**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.

**--tls-server-name** _string_ Override target TLS server name.

## update-options

Update the options of a running Activity that were passed into it from a Workflow. Updates are incremental, only changing the specified options. For example:

```
temporal activity update-options \
  --activity-id YourActivityId \
  --workflow-id YourWorkflowId \
  --task-queue NewTaskQueueName \
  --schedule-to-close-timeout DURATION \
  --schedule-to-start-timeout DURATION \
  --start-to-close-timeout DURATION \
  --heartbeat-timeout DURATION \
  --retry-initial-interval DURATION \
  --retry-maximum-interval DURATION \
  --retry-backoff-coefficient NewBackoffCoefficient \
  --retry-maximum-attempts NewMaximumAttempts
```

You may follow this command with `temporal activity reset`, and the new values will apply after the reset.

Either `activity-id`, `activity-type`, or `--match-all` must be specified.

Activity options can be updated in bulk with a visibility query list filter. For example, if you want to update options for activities of type Foo:

```
temporal activity update-options \
  --query 'TemporalPauseInfo="property:activityType=Foo"' ...
```

Use the following options to change the behavior of this command.

**Flags:**

**--activity-id**, **-a** _string_ The Activity ID to update options.
Mutually exclusive with `--query`, `--match-all`, and `--activity-type`. Requires `--workflow-id` to be specified.

**--activity-type** _string_ Activities of this Type will be updated. Mutually exclusive with `--match-all` and `--activity-id`.

**--heartbeat-timeout** _duration_ Maximum permitted time between successful worker heartbeats.

**--match-all** _bool_ Every activity should be updated. Mutually exclusive with `--activity-id` and `--activity-type`.

**--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query.

**--reason** _string_ Reason for batch operation. Only use with --query. Defaults to user name.

**--restore-original-options** _bool_ Restore the original options of the activity.

**--retry-backoff-coefficient** _float_ Coefficient used to calculate the next retry interval. The next retry interval is the previous interval multiplied by the backoff coefficient. Must be 1 or larger.

**--retry-initial-interval** _duration_ Interval of the first retry. If retryBackoffCoefficient is 1.0 then it is used for all retries.

**--retry-maximum-attempts** _int_ Maximum number of attempts. When exceeded, retries stop even if the timeouts have not yet expired. Setting this value to 1 disables retries. Setting this value to 0 means unlimited attempts (up to the timeouts).

**--retry-maximum-interval** _duration_ Maximum interval between retries. Exponential backoff leads to interval increase. This value is the cap of the increase.

**--rps** _float_ Limit batch's requests per second. Only allowed if query is present.

**--run-id**, **-r** _string_ Run ID. Only use with --workflow-id. Cannot use with --query.

**--schedule-to-close-timeout** _duration_ Indicates how long the caller is willing to wait for an activity completion. Limits how long retries will be attempted.

**--schedule-to-start-timeout** _duration_ Limits the time an activity task can stay in a task queue before a worker picks it up. This timeout is always non-retryable, as all a retry would achieve is to put it back into the same queue. Defaults to the schedule-to-close timeout or workflow execution timeout if not specified.

**--start-to-close-timeout** _duration_ Maximum time an activity is allowed to execute after being picked up by a worker. This timeout is always retryable.

**--task-queue** _string_ Name of the task queue for the Activity.

**--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query.

**--yes**, **-y** _bool_ Don't prompt to confirm the operation. Only allowed when --query is present.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")

**--api-key** _string_ API key for request.

**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.

**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.

**--codec-auth** _string_ Authorization header for Codec Server requests.

**--codec-endpoint** _string_ Remote Codec Server endpoint.

**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.

**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")

**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. --- ## Temporal CLI batch command reference {/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */} ## describe Show the progress of an ongoing batch job. Pass a valid job ID to display its information: ``` temporal batch describe \ --job-id YourJobId ``` Use the following options to change the behavior of this command. **Flags:** **--job-id** _string_ Batch job ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. 
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## list Return a list of batch jobs on the Service or within a single Namespace. 
For example, list the batch jobs for "YourNamespace": ``` temporal batch list \ --namespace YourNamespace ``` Use the following options to change the behavior of this command. **Flags:** **--limit** _int_ Maximum number of batch jobs to display. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. 
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## terminate Terminate a batch job with the provided job ID. You must provide a reason for the termination. The Service stores this explanation as metadata for the termination event for later reference: ``` temporal batch terminate \ --job-id YourJobId \ --reason YourTerminationReason ``` Use the following options to change the behavior of this command. **Flags:** **--job-id** _string_ Job ID to terminate. Required. **--reason** _string_ Reason for terminating the batch job. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. 
Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

---

## Temporal CLI command options reference

## active-cluster

Active cluster name.

## activity-id

Identifies the Activity Execution.

## activity-jitter

If set, the Activity will start at a random time within the specified jitter duration.

## activity-type

The command is applied to all running Activities of this type.

## address

The host and port (formatted as host:port) for the Temporal Frontend Service.

## api-key

API key for request.

## archived

List archived Workflow Executions.

:::note

Caution: `--archived` is experimental.

:::

## build-id

Identifies the build to retrieve reachability information for. Can be specified multiple times.

## calendar

Calendar specification in JSON (`{"dayOfWeek":"Fri","hour":"17","minute":"5"}`) or as a Cron string (`"30 2 * * 5"` or `"@daily"`).

## catchup-window

Maximum allowed catch-up time if server is down.

## cluster

Cluster name.

## codec-auth

Sets the authorization header on requests to the Codec Server.

## codec-endpoint

Endpoint for a remote Codec Server.

## color

When to use color: auto, always, never. (default: auto)

## command-timeout

The command execution timeout. 0s means no timeout.

## concurrency

Request concurrency.

## cron

The Cron schedule can be formatted like the following:

```text
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```

## data

Namespace data in a key=value format. Values must be in JSON format.

## db-filename

File in which to persist Temporal state. By default, Workflows are lost when the process dies.

## depth

The number of Child Workflows to fetch and expand. Use `-1` to fetch Child Workflows at any depth.

## description

Namespace description or Nexus Endpoint description. You may use Markdown formatting in the Nexus Endpoint description.

## description-file

Path to the Nexus Endpoint description file. The contents of the description file may use Markdown formatting.

## detail

A provided reason for failing an Activity.

## dry-run

Simulate reset without resetting any Workflow Executions.

## dynamic-config-value

Dynamic config value, formatted as `KEY=JSON_VALUE`. String values require quotation marks.

## email

Owner email.

## enable-connection

Enable cross-cluster connection.

## end-time

Backfill end time.

## env

Name of the environment to read environment variables from.

## env-file

Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.

## event-id

The Event Id for any Event after WorkflowTaskStarted you want to reset to (exclusive). It can be WorkflowTaskCompleted, WorkflowTaskFailed, or others.

## exclude-file

Input file that specifies Workflow Executions to exclude from resetting.
## execution-timeout

Timeout (in seconds) for a [Workflow Execution](/workflow-execution), including retries and `ContinueAsNew` tasks.

## existing-compatible-build-id

A Build Id that already exists in the version sets known by the Task Queue. New Build Ids are stored in the version set containing this Id, making them compatible with the versions in that set.

## fields

Customize fields to print. Set to 'long' to automatically print more of main fields.

## first-execution-run-id

Run update on the last execution in the chain that started with this Run Id.

## fold

Statuses for which Child Workflows will be folded in (this reduces the amount of information fetched and displayed). Case-insensitive and ignored if `--no-fold` is supplied.

## follow

Follow the progress of a Workflow Execution.

## frontend-address

Frontend address of the remote Cluster.

## global

Flag to indicate whether a Namespace is a Global Namespace.

## grpc-meta

Contains gRPC metadata to send with requests (format: `key=value`). Values must be in a valid JSON format.

## headless

Disable the Web UI.

## heartbeat-timeout

Maximum permitted time between successful Worker Heartbeats.

## history-archival-state

Sets the history archival state. Valid values are "disabled" and "enabled".

## history-uri

Optionally specify history archival URI (cannot be changed after first time archival is enabled).

## id-reuse-policy

Allows the same Workflow Id to be used in a new Workflow Execution. Options: AllowDuplicate, AllowDuplicateFailedOnly, RejectDuplicate, TerminateIfRunning.

## identity

Specify operator's identity.

## input

Use the `--input` command option to include data in the command. This command option accepts a valid JSON string. If the entity that the command is acting on accepts multiple parameters, pass "null" for null values within the JSON string.

The following is an example of starting a Workflow with the `--input` command option. This Workflow expects a single string as a parameter:

```shell
temporal workflow start --input '"+1 555-555-5555"'
```

## input-file

Passes optional input for the Workflow from a JSON file. If there are multiple JSON files, concatenate them and separate by space or newline. Input from the command line will overwrite file input.

## input-parallelism

Number of goroutines to run in parallel. Each goroutine processes one line per second.

## input-separator

Separator for the input file. The default is a tab (`\t`).

## interval

Interval duration, such as 90m, or 90m/13m to include phase offset.

## ip

IPv4 address to bind the frontend service to. (default: 127.0.0.1)

## jitter

Jitter duration.

## job-id

Batch Job Id.

## keep-paused

If this flag is provided and the Activity was paused, it will stay paused after reset.

## limit

Number of items to print on a page.

## log-format

Set the log formatting. Options: ["json", "pretty"].

## log-level

Set the log level. Options: ["debug" "info" "warn" "error" "fatal"].

## match-all

If set, all currently running activities will be unpaused.

## max-field-length

Maximum length for each attribute field.

## max-sets

Limits how many compatible sets will be returned. Specify 1 to return only the current default major version set. 0 returns all sets.

## memo

Set a memo on a schedule (format: key=value). Use valid JSON formats for value.

## memo-file

Set a memo from a file. Each line should follow the format key=value. Use valid JSON formats for value.

## metrics-port

Port for `/metrics`. Enabled by default with a randomly assigned port.
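For example, a minimal sketch that pins the metrics endpoint to a fixed port on the development server (the port number here is an arbitrary illustration):

```shell
# Serve server metrics on a fixed port instead of a randomly assigned one
temporal server start-dev --metrics-port 9091

# Metrics are then available at http://localhost:9091/metrics
```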
## name

Frontend address of the remote Cluster or the Endpoint name.

## namespace

Identifies a Namespace in the Temporal Workflow.

## namespace-id

Namespace Id.

## no-fold

Disable folding. All Child Workflows within the set depth will be fetched and displayed.

## no-json-shorthand-payloads

Raw payload output, even if the JSON option was used.

## no-pager

Disables the interactive pager.

## non-deterministic

Reset a Workflow Execution only if its last Event is `WorkflowTaskFailed` with a nondeterminism error.

## notes

Initial value of notes field.

## output

Format of output. Options: table, json, card.

## overlap-policy

Overlap policy. Options: Skip, BufferOne, BufferAll, CancelOther, TerminateOther, AllowAll.

## pager

Sets the pager for the Temporal CLI to use. Options are less, more, and favoritePager.

## pause

Pauses the Schedule.

## pause-on-failure

Pause schedule after any Workflow failure.

## port

Port for the frontend gRPC service.

## promote-global

Promote local Namespace to Global Namespace.

## query

Provides a SQL-like Query of Search Attributes to return Workflow Executions to reset. For more information, refer to the [`temporal workflow list`](/cli/workflow#list) command.

## raw

Print raw data in a JSON format. For scripting, we recommend using this option instead of `-o json`.

## reachability-type

Specify how you'd like to filter the reachability of Build IDs. The following are valid choices:

- `open`: reachable by one or more open Workflows.
- `closed`: reachable by one or more closed Workflows.
- `existing`: reachable by either open or closed Workflows.

Build IDs that are reachable by new Workflows are always reported.

## reapply-type

Event types to reapply after the reset point. Options: Signal, None.

## reason

Reason for the operation.

## reject-condition

Optional flag for rejecting Queries based on Workflow state. Valid values are "not_open" and "not_completed_cleanly".

## remaining-actions

Total number of actions allowed.

## reset-attempts

Providing this flag will reset the number of attempts.

## reset-heartbeat

Providing this flag will reset the heartbeat details.

## reset-points

Only show Workflow Events that are eligible for reset.

## result

Set the result value of Activity completion.

## retention

Workflow Execution retention.

## retry-backoff-coefficient

Coefficient used to calculate the next retry interval. The next retry interval is the previous interval multiplied by the coefficient. Must be 1 or larger.

## retry-initial-interval

Interval of the first retry. If retryBackoffCoefficient is 1.0 then it is used for all retries.

## retry-maximum-attempts

Maximum number of attempts. When exceeded, retries stop even if the timeout has not yet expired. 1 disables retries. 0 means unlimited (up to the timeouts).

## retry-maximum-interval

Maximum interval between retries. Exponential backoff leads to interval increase. This value is the cap of the increase. Default is 100x of the initial interval.

## run-id

Identifies the current Workflow Run.

## run-timeout

Timeout (in seconds) of a single Workflow run.

## schedule-id

Schedule Id.

## schedule-to-close-timeout

Indicates how long the caller is willing to wait for an Activity completion. Limits how long retries will be attempted.

## schedule-to-start-timeout

Limits the time an Activity Task can stay in a task queue before a Worker picks it up. This timeout is always non-retryable, as all a retry would achieve is to put it back into the same queue. Defaults to `schedule_to_close_timeout` or the Workflow execution timeout if not specified.
## search-attribute

Set Search Attribute on a Schedule (formatted as `key=value`). Use valid JSON formats for value.

## set-as-default

When set, establishes the compatible set being targeted as the default for the Task Queue. If a different set is the current default, the targeted set replaces it.

## skip-base-is-not-current

Skip a Workflow Execution if the base Workflow Run is not the current Workflow Run.

## skip-current-open

Skip a Workflow Execution if the current Run is open for the same Workflow Id as the base Run.

## sqlite-pragma

Specify sqlite pragma statements in pragma=value format. Pragma options: ["journal_mode" "synchronous"].

## start-delay

Specify a delay before the workflow starts.

## start-time

Backfill start time.

## start-to-close-timeout

Maximum time an Activity is allowed to execute after being picked up by a Worker. This Timeout is always retryable.

## target-namespace

Namespace in which a handler Worker will poll for Nexus tasks.

## target-task-queue

Task Queue in which a handler Worker will poll for Nexus tasks.

## target-url

An external Nexus Endpoint where Nexus requests are forwarded to. May be used as an alternative to `--target-namespace` and `--target-task-queue`.

:::note

Caution: `--target-url` is experimental.

:::

## task-queue

Task Queue.

## task-queue-type

Task Queue type, which can be either Workflow or Activity. The default type is Workflow.

## task-timeout

Start-to-close timeout for a Workflow Task (in seconds).

## time-format

Format time as: relative, iso, raw.

## time-zone

Time zone (IANA name).

## tls

Enable TLS encryption without additional options such as mTLS or client certificates.

## tls-ca-data

Data for server CA certificate. Can't be used with --tls-ca-path.

## tls-ca-path

Path to server CA certificate.

## tls-cert-data

Data for x509 certificate. Can't be used with --tls-cert-path.

## tls-cert-path

Path to x509 certificate.

## tls-disable-host-verification

Disables TLS host name verification.

## tls-key-data

Private certificate key data. Can't be used with --tls-key-path.

## tls-key-path

Path to private certificate key.

## tls-server-name

Overrides the target TLS server name.

## type

Search attribute type. Options: Text, Keyword, Int, Double, Bool, Datetime, KeywordList.

## ui-asset-path

UI Custom Assets path.

## ui-codec-endpoint

UI Remote data converter HTTP endpoint.

## ui-ip

IPv4 address to bind the Web UI to.

## ui-port

Port for the Web UI. Default: `--port` + 1000 (for example, 4000).

## unpause

Unpauses the Schedule.

## unset-description

Unset the description.

## verbose

Print applied Namespace changes.

## visibility-archival-state

Visibility Archival state. Valid values: "disabled", "enabled".

## visibility-uri

Specify URI for Visibility Archival. This cannot be changed after Archival is enabled.

## workflow-id

Workflow Id.

## workflow-type

Workflow type name.

## yes

Confirm all prompts.

---

## Temporal CLI config command reference

{/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */}

## delete

Remove a property within a profile:

```
temporal config delete \
    --prop tls.client_cert_path
```

Use the following options to change the behavior of this command.

**Flags:**

**--prop**, **-p** _string_ Specific property to delete. If unset, deletes entire profile. Required.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint.
(default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporalio/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## delete-profile Remove a full profile entirely. 
The `--profile` flag must be set explicitly:

```
temporal config delete-profile \
    --profile my-profile
```

Use the following options to change the behavior of this command.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")
**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows.

:::note

Option is experimental.

:::

**--disable-config-env** _bool_ If set, disables loading environment config from environment variables.

:::note

Option is experimental.

:::

**--disable-config-file** _bool_ If set, disables loading environment config from config file.

:::note

Option is experimental.

:::

**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".
**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")
**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")
**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")
**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.
**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")
**--profile** _string_ Profile to use for config file.

:::note

Option is experimental.

:::

**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data.
Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## get Display specific properties or the entire profile. ``` temporal config get \ --prop address ``` or ``` temporal config get ``` Use the following options to change the behavior of this command. **Flags:** **--prop**, **-p** _string_ Specific property to get. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. 
Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## list List profile names in the config file. ``` temporal config list ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. 
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## set Assign a value to a property and store it in the config file: ``` temporal config set \ --prop address \ --value us-west-2.aws.api.temporal.io:7233 ``` Use the following options to change the behavior of this command. **Flags:** **--prop**, **-p** _string_ Property name. Required. **--value**, **-v** _string_ Property value. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. 
Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

---

## Temporal CLI env command reference

{/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */}

## delete

Remove a preset environment entirely _or_ remove a key-value pair within an environment. If you don't specify an environment (with `--env` or by setting the `TEMPORAL_ENV` variable), this command updates the "default" environment:

```
temporal env delete \
    --env YourEnvironment
```

or

```
temporal env delete \
    --env prod \
    --key tls-key-path
```

Use the following options to change the behavior of this command.

**Flags:**

**--key**, **-k** _string_ Property name.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")
**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows.

:::note

Option is experimental.

:::

**--disable-config-env** _bool_ If set, disables loading environment config from environment variables.

:::note

Option is experimental.

:::

**--disable-config-file** _bool_ If set, disables loading environment config from config file.

:::note

Option is experimental.

:::

**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request.
Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## get List the properties for a given environment: ``` temporal env get \ --env YourEnvironment ``` Print a single property: ``` temporal env get \ --env YourEnvironment \ --key YourPropertyKey ``` If you don't specify an environment (with `--env` or by setting the `TEMPORAL_ENV` variable), this command lists properties of the "default" environment. Use the following options to change the behavior of this command. **Flags:** **--key**, **-k** _string_ Property name. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. 
::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## list List the environments you have set up on your local computer. Environments are stored in "$HOME/.config/temporalio/temporal.yaml". Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. 
:::

**--disable-config-env** _bool_ If set, disables loading environment config from environment variables.

:::note

Option is experimental.

:::

**--disable-config-file** _bool_ If set, disables loading environment config from config file.

:::note

Option is experimental.

:::

**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".
**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")
**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")
**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")
**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.
**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")
**--profile** _string_ Profile to use for config file.

:::note

Option is experimental.

:::

**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

## set

Assign a value to a property key and store it in an environment:

```
temporal env set \
    --env environment \
    --key property \
    --value value
```

If you don't specify an environment (with `--env` or by setting the `TEMPORAL_ENV` variable), this command sets properties in the "default" environment.

Storing keys with CLI option names lets the CLI automatically set those options for you. This reduces effort and helps avoid typos when issuing commands.

Use the following options to change the behavior of this command.

**Flags:**

**--key**, **-k** _string_ Property name (required).
**--value**, **-v** _string_ Property value (required).

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. --- ## Temporal CLI command reference The Temporal CLI provides direct access to a Temporal Service via the terminal. It's a powerful tool for managing, monitoring, and debugging Temporal Applications. You can use it to start, stop, inspect and operate on Workflows and Activities, and perform administrative tasks such as Namespace, Schedule, and Task Queue management. 
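For example, two common invocations against a running Temporal Service (the Workflow Id below is a placeholder, not a value from this reference):

```shell
# List recent Workflow Executions in the current Namespace
temporal workflow list

# Show the Event History for a specific Workflow Execution
temporal workflow show --workflow-id YourWorkflowId
```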
The Temporal CLI also includes an embedded Temporal Service suitable for use in development and CI/CD. It includes the [Temporal Server](/temporal-service/temporal-server), SQLite persistence, and the [Temporal Web UI](/web-ui).

:::note

When upgrading from [tctl](/tctl-v1) to the Temporal CLI, make sure to update your environment variables and use updated commands. For details, see [CLI release notes](https://github.com/temporalio/cli/releases/).

:::

## Install the Temporal CLI {#install}

The Temporal CLI is available on macOS, Windows, and Linux, or as a Docker image.

### macOS

Choose one of the following install methods to install the Temporal CLI on macOS:

- Install the Temporal CLI with Homebrew.

  ```shell
  brew install temporal
  ```

- Install the Temporal CLI from CDN.
  1. Select the platform and architecture needed.
     - Download for Darwin amd64: https://temporal.download/cli/archive/latest?platform=darwin&arch=amd64
     - Download for Darwin arm64: https://temporal.download/cli/archive/latest?platform=darwin&arch=arm64
  2. Extract the downloaded archive.
  3. Add the Temporal CLI binary to your PATH.

### Linux

Choose one of the following install methods to install the Temporal CLI on Linux:

- Install the Temporal CLI from CDN.
  1. Select the platform and architecture needed.
     - Download for Linux amd64: https://temporal.download/cli/archive/latest?platform=linux&arch=amd64
     - Download for Linux arm64: https://temporal.download/cli/archive/latest?platform=linux&arch=arm64
  2. Extract the downloaded archive.
  3. Add the `temporal` binary to your PATH.

### Windows

Choose one of the following methods to install the Temporal CLI on Windows:

- Install the Temporal CLI from CDN.
  1. Select the platform and architecture needed and download the binary.
     - Download for Windows amd64: https://temporal.download/cli/archive/latest?platform=windows&arch=amd64
     - Download for Windows arm64: https://temporal.download/cli/archive/latest?platform=windows&arch=arm64
  2. Extract the downloaded archive.
  3. Add the `temporal.exe` binary to your PATH.

### Docker

The Temporal CLI container image is available on [DockerHub](https://hub.docker.com/r/temporalio/temporal) and can be run directly:

```shell
docker run --rm temporalio/temporal --help
```

:::note

When running the Temporal CLI inside Docker, for the development server to be accessible from the host system, the server must be configured to listen on an external IP and the ports must be forwarded:

```shell
docker run --rm -p 7233:7233 -p 8233:8233 temporalio/temporal server start-dev --ip 0.0.0.0
# The UI is now accessible from the host at http://localhost:8233/
```

:::

## Start a Temporal development server {#start-dev-server}

To start a Temporal development server, run the following command:

```bash
temporal server start-dev
```

This command automatically starts the Web UI, creates the `default` [Namespace](/namespaces), and uses an in-memory database. The Temporal Server will be available on `localhost:7233` and the Temporal Web UI will be available at [`http://localhost:8233`](http://localhost:8233/).

The in-memory SQLite database does not persist if you stop the development server. Use the `--db-filename` option to specify a database file so that application state persists. This is helpful if you plan on stopping and restarting the development server.

```shell
temporal server start-dev --db-filename temporal.db
```

:::note

Local databases created with `--db-filename` may not be compatible with newer versions of the Temporal CLI.
The `temporal server start-dev` command is only intended for development environments. ::: For the full list of development server options, use the `--help` flag: ```shell temporal server start-dev --help ``` ## Enable auto-completion {#enable-auto-completion} Enable auto-completion using the following commands. ### zsh auto-completion 1. Add the following line to your `~/.zshrc` startup script: ```sh eval "$(temporal completion zsh)" ``` 2. Re-launch your shell or run: ```sh source ~/.zshrc ``` ### Bash auto-completion 1. Install [bash-completion](https://github.com/scop/bash-completion#installation) and add the software to your `~/.bashrc`. 2. Add the following line to your `~/.bashrc` startup script: ```sh eval "$(temporal completion bash)" ``` 3. Re-launch your shell or run: ```sh source ~/.bashrc ``` :::note If auto-completion fails with the error: `bash: _get_comp_words_by_ref: command not found`, you did not successfully install [bash-completion](https://github.com/scop/bash-completion#installation). This package must be loaded into your shell for `temporal` auto-completion to work. ::: ### Fish auto-completion 1. Create the Fish custom completions directory if it does not already exist: ```fish mkdir -p ~/.config/fish/completions ``` 2. Configure the completions to load when needed. Note: the file name must be `temporal.fish` or the completions will not be found: ```fish echo 'eval "$(temporal completion fish)"' >~/.config/fish/completions/temporal.fish ``` 3. Re-launch your shell or run: ```fish source ~/.config/fish/completions/temporal.fish ``` ## Command set - [temporal activity](/cli/activity/) - [temporal batch](/cli/batch/) - [temporal env](/cli/env/) - [temporal operator](/cli/operator/) - [temporal schedule](/cli/schedule/) - [temporal server](/cli/server) - [temporal task-queue](/cli/task-queue/) - [temporal workflow](/cli/workflow/) ## Configuration The following information provides important configuration details. ### Namespace registration Namespaces are pre-registered at startup for immediate use. Customize pre-registered Namespaces with the following command: ```shell temporal server start-dev --namespace foo --namespace bar ``` Register Namespaces with `namespace create`: ```shell temporal operator namespace create --namespace foo ``` ### Enable or disable Temporal UI By default, the Temporal UI is enabled when running the development server using the Temporal CLI. To disable the UI, use the `--headless` modifier: ```shell temporal server start-dev --headless ``` ### Dynamic configuration Advanced Temporal CLI configuration requires a dynamic configuration file. To set values on the command line, use `--dynamic-config-value KEY=JSON_VALUE`. For example, enable the Search Attribute cache: ```bash temporal server start-dev --dynamic-config-value system.forceSearchAttributesCacheRefreshOnRead=false ``` This setting makes created Search Attributes immediately available. ## Environment variables The following table describes the environment variables you can set for the Temporal CLI. | Variable | Definition | Client Option | | ---------------------------------------- | ------------------------------------------------------------------------- | ------------------------------- | | `TEMPORAL_ADDRESS` | Host and port (formatted as host:port) for the Temporal Frontend Service. | --address | | `TEMPORAL_CODEC_AUTH` | Authorization header for requests to Codec Server. | --codec-auth | | `TEMPORAL_CODEC_ENDPOINT` | Endpoint for remote Codec Server. 
| --codec-endpoint | | `TEMPORAL_NAMESPACE` | Temporal Service Namespace. Default: "default". | --namespace | | `TEMPORAL_TLS_CA` | Path to server CA certificate. | --tls-ca-path | | `TEMPORAL_TLS_CERT` | Path to x509 certificate. | --tls-cert-path | | `TEMPORAL_TLS_DISABLE_HOST_VERIFICATION` | Disables TLS host name verification. Default: false. | --tls-disable-host-verification | | `TEMPORAL_TLS_KEY` | Path to private certificate key. | --tls-key-path | | `TEMPORAL_TLS_SERVER_NAME` | Override for target TLS server name. | --tls-server-name | | `TEMPORAL_API_KEY` | API key used for authentication. | --api-key | :::tip ENVIRONMENT VARIABLES Do not confuse environment variables, which are set in your shell, with `temporal env` settings. ::: ## Create and modify configuration files {#configuration-files} The Temporal CLI lets you create and modify TOML configuration files to store your environment variables and other settings. Refer to [Environment Configuration](../develop/environment-configuration#cli-integration) for more information. ## Proxy support The Temporal CLI supports operating behind a proxy, so it can communicate with a Temporal Service even in network-restricted environments. #### Setting up proxy support If you are behind a proxy, you'll need to instruct the Temporal CLI to route its requests via that proxy. You can achieve this by setting the `HTTPS_PROXY` environment variable. ```command export HTTPS_PROXY=<proxy-host>:<proxy-port> ``` Replace `<proxy-host>` with the proxy's hostname or IP address, and `<proxy-port>` with the proxy's port number. Once set, you can run Temporal CLI commands as you normally would. :::note The Temporal CLI uses the gRPC library, which natively supports HTTP CONNECT proxies. The gRPC library checks for the `HTTPS_PROXY` (and its case-insensitive variants) environment variable to determine if it should route requests through a proxy. ::: In addition to `HTTPS_PROXY`, gRPC also respects the `NO_PROXY` environment variable. This can be useful if there are specific addresses or domains you wish to exclude from proxying. For more information, see [Proxy](https://github.com/grpc/grpc-go/blob/master/Documentation/proxy.md) in the gRPC documentation. ## Common CLI operations {#common-operations} The following are some of the more common operations you can perform with the Temporal CLI. ### Start a Workflow In another terminal, use the following commands to interact with the Server.
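To confirm that the CLI can reach the development server before starting Workflows, a quick health check works (this assumes the dev server is running at the default `localhost:7233` address):

```shell
temporal operator cluster health
```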
The following command starts a Workflow: ```shell $ temporal workflow start \ --task-queue hello-world \ --type MyWorkflow \ --workflow-id 123 \ --input 456 Running execution: WorkflowId 123 RunId 357074e4-0dd8-4c44-8367-d92536dd0943 Type MyWorkflow Namespace default TaskQueue hello-world Args [456] ``` Shorthand options are available: ```shell temporal workflow start -t hello-world --type MyWorkflow -w 123 -i 456 ``` You can also list and describe Workflows: ```shell $ temporal workflow list Status WorkflowId Name StartTime Running 123 MyWorkflow 14 seconds ago $ temporal workflow describe --workflow-id 123 { "executionConfig": { "taskQueue": { "name": "hello-world", "kind": "Normal" }, "workflowExecutionTimeout": "0s", "workflowRunTimeout": "0s", "defaultWorkflowTaskTimeout": "10s" }, "workflowExecutionInfo": { "execution": { "workflowId": "123", "runId": "357074e4-0dd8-4c44-8367-d92536dd0943" }, "type": { "name": "MyWorkflow" }, "startTime": "2023-04-15T06:42:31.191137Z", "status": "Running", "historyLength": "2", "executionTime": "2023-04-15T06:42:31.191137Z", "memo": { }, "autoResetPoints": { }, "stateTransitionCount": "1" }, "pendingWorkflowTask": { "state": "Scheduled", "scheduledTime": "2023-04-15T06:42:31.191173Z", "originalScheduledTime": "2023-04-15T06:42:31.191173Z", "attempt": 1 } } ``` For more detailed output in JSON format, use the following command: ```shell $ temporal workflow list --output json [ { "execution": { "workflow_id": "123", "run_id": "357074e4-0dd8-4c44-8367-d92536dd0943" }, "type": { "name": "MyWorkflow" }, "start_time": "2023-04-15T06:42:31.191137Z", "status": 1, "execution_time": "2023-04-15T06:42:31.191137Z", "memo": {}, "task_queue": "hello-world" } ] ``` List each Workflow's Type with [jq](https://stedolan.github.io/jq/): ```shell $ temporal workflow list --output json | jq '.[].type.name' "OtherWorkflow" "MyWorkflow" "MyWorkflow" ``` To count Workflows by Type, use the following command: ```shell $ temporal workflow list --output json | jq '.[].type.name' | uniq -c 1 "OtherWorkflow" 2 "MyWorkflow" ``` To see the full range of Workflow-related commands, run `temporal workflow` or see the [Temporal CLI workflow command reference](/cli/workflow). For a full list of available commands, run `temporal` without arguments or see [Available commands](#command-set). ### Customize your environment variables To communicate with a different Server, such as a production Namespace on Temporal Cloud: 1. Create an environment named `prod`. 2. Pass `--env prod` to commands, like `temporal workflow list --env prod`. To create a new environment and set its properties: ```shell temporal env set prod.namespace production.f45a2 temporal env set prod.address production.f45a2.tmprl.cloud:7233 temporal env set prod.tls-cert-path /temporal/certs/prod.pem temporal env set prod.tls-key-path /temporal/certs/prod.key ``` Check your settings: ```shell $ temporal env get prod address production.f45a2.tmprl.cloud:7233 namespace production.f45a2 tls-cert-path /temporal/certs/prod.pem tls-key-path /temporal/certs/prod.key ``` Run a command to test the connection: ```shell $ temporal workflow list --env prod ``` For a full list of properties, use `temporal env set -h`. ```shell $ temporal env set -h OPTIONS: Client Options: --address value The host and port (formatted as host:port) for the Temporal Frontend Service. [$TEMPORAL_CLI_ADDRESS] --codec-auth value Sets the authorization header on requests to the Codec Server.
[$TEMPORAL_CLI_CODEC_AUTH] --codec-endpoint value Endpoint for a remote Codec Server. [$TEMPORAL_CLI_CODEC_ENDPOINT] --command-timeout duration Timeout for the span of a command. (default 0s) --env value Name of the environment to read environment variables from. (default: "default") --grpc-meta value [ --grpc-meta value ] Contains gRPC metadata to send with requests (format: key=value). Values must be in a valid JSON format. --namespace value, -n value Identifies a Namespace in the Temporal Workflow. (default: "default") [$TEMPORAL_CLI_NAMESPACE] --tls-ca-path value Path to server CA certificate. [$TEMPORAL_CLI_TLS_CA] --tls-cert-path value Path to x509 certificate. [$TEMPORAL_CLI_TLS_CERT] --tls-disable-host-verification Disables TLS host name verification if already enabled. (default: false) [$TEMPORAL_CLI_TLS_DISABLE_HOST_VERIFICATION] --tls-key-path value Path to private certificate key. [$TEMPORAL_CLI_TLS_KEY] --tls-server-name value Provides an override for the target TLS server name. [$TEMPORAL_CLI_TLS_SERVER_NAME] Display Options: --color value when to use color: auto, always, never. (default: "auto") ``` --- ## Temporal CLI operator command reference ## cluster Perform operator actions on Temporal Services (also known as Clusters). ``` temporal operator cluster [subcommand] [options] ``` For example, to check Service/Cluster health: ``` temporal operator cluster health ``` ### describe View information about a Temporal Cluster (Service), including Cluster Name, persistence store, and visibility store. Add `--detail` for additional info: ``` temporal operator cluster describe [--detail] ``` Use the following options to change the behavior of this command. **Flags:** **--detail** _bool_ Show history shard count and Cluster/Service version information. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### health View information about the health of a Temporal Service: ``` temporal operator cluster health ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. 
::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### list Print a list of remote Temporal Clusters (Services) registered to the local Service. Report details include the Cluster's name, ID, address, History Shard count, Failover version, and availability: ``` temporal operator cluster list [--limit max-count] ``` Use the following options to change the behavior of this command. **Flags:** **--limit** _int_ Maximum number of Clusters to display. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. 
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### remove Remove a registered remote Temporal Cluster (Service) from the local Service. ``` temporal operator cluster remove \ --name YourClusterName ``` Use the following options to change the behavior of this command. **Flags:** **--name** _string_ Cluster/Service name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. 
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### system Show Temporal Server information for Temporal Clusters (Service): Server version, scheduling support, and more. This information helps diagnose problems with the Temporal Server. The command defaults to the local Service. 
Otherwise, use the `--frontend-address` option to specify a Cluster (Service) endpoint: ``` temporal operator cluster system \ --frontend-address "YourRemoteEndpoint:YourRemotePort" ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. 
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### upsert Add, remove, or update a registered ("remote") Temporal Cluster (Service). ``` temporal operator cluster upsert [options] ``` For example: ``` temporal operator cluster upsert \ --frontend-address "YourRemoteEndpoint:YourRemotePort" --enable-connection false ``` Use the following options to change the behavior of this command. **Flags:** **--enable-connection** _bool_ Set the connection to "enabled". **--frontend-address** _string_ Remote endpoint. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. 
Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## namespace Manage Temporal Cluster (Service) Namespaces: ``` temporal operator namespace [command] [command options] ``` For example: ``` temporal operator namespace create \ --namespace YourNewNamespaceName ``` ### create Create a new Namespace on the Temporal Service: ``` temporal operator namespace create \ --namespace YourNewNamespaceName \ [options] ``` Create a Namespace with multi-region data replication: ``` temporal operator namespace create \ --global \ --namespace YourNewNamespaceName ``` Configure settings like retention and Visibility Archival State as needed. For example, the Visibility Archive can be set on a separate URI: ``` temporal operator namespace create \ --retention 5d \ --visibility-archival-state enabled \ --visibility-uri YourURI \ --namespace YourNewNamespaceName ``` Note: URI values for archival states can't be changed once enabled. Use the following options to change the behavior of this command. **Flags:** **--active-cluster** _string_ Active Cluster (Service) name. **--cluster** _string[]_ Cluster (Service) names for Namespace creation. Can be passed multiple times. **--data** _string[]_ Namespace data as `KEY=VALUE` pairs. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--description** _string_ Namespace description. **--email** _string_ Owner email. **--global** _bool_ Enable multi-region data replication. **--history-archival-state** _string-enum_ History archival state. Accepted values: disabled, enabled. (default "disabled") **--history-uri** _string_ Archive history to this `URI`. Once enabled, can't be changed. **--retention** _duration_ Time to preserve closed Workflows before deletion. (default "72h") **--visibility-archival-state** _string-enum_ Visibility archival state. Accepted values: disabled, enabled. (default "disabled") **--visibility-uri** _string_ Archive visibility data to this `URI`. Once enabled, can't be changed. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### delete Remove a Namespace from the Service. ``` temporal operator namespace delete [options] ``` For example: ``` temporal operator namespace delete \ --namespace YourNamespaceName ``` Use the following options to change the behavior of this command. **Flags:** **--yes**, **-y** _bool_ Don't prompt to confirm deletion. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### describe Provide long-form information about a Namespace identified by its ID or name: ``` temporal operator namespace describe \ --namespace-id YourNamespaceId ``` or ``` temporal operator namespace describe \ --namespace YourNamespaceName ``` Use the following options to change the behavior of this command. **Flags:** **--namespace-id** _string_ Namespace ID. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. 
(default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. 
### list Display a detailed listing for all Namespaces on the Service: ``` temporal operator namespace list ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. 
Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### update Update a Namespace using properties you specify. ``` temporal operator namespace update [options] ``` Assign a Namespace's active Cluster (Service): ``` temporal operator namespace update \ --namespace YourNamespaceName \ --active-cluster NewActiveCluster ``` Promote a Namespace for multi-region data replication: ``` temporal operator namespace update \ --namespace YourNamespaceName \ --promote-global ``` You may update archives that were previously enabled or disabled. Note: URI values for archival states can't be changed once enabled. ``` temporal operator namespace update \ --namespace YourNamespaceName \ --history-archival-state enabled \ --visibility-archival-state disabled ``` Use the following options to change the behavior of this command. **Flags:** **--active-cluster** _string_ Active Cluster (Service) name. **--cluster** _string[]_ Cluster (Service) names. **--data** _string[]_ Namespace data as `KEY=VALUE` pairs. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--description** _string_ Namespace description. **--email** _string_ Owner email. **--history-archival-state** _string-enum_ History archival state. Accepted values: disabled, enabled. **--history-uri** _string_ Archive history to this `URI`. Once enabled, can't be changed. **--promote-global** _bool_ Enable multi-region data replication. **--replication-state** _string-enum_ Replication state. Accepted values: normal, handover. **--retention** _duration_ Length of time a closed Workflow is preserved before deletion. **--visibility-archival-state** _string-enum_ Visibility archival state. Accepted values: disabled, enabled. **--visibility-uri** _string_ Archive visibility data to this `URI`. Once enabled, can't be changed. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. 
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## nexus These commands manage Nexus resources. Nexus commands follow this syntax: ``` temporal operator nexus [command] [subcommand] [options] ``` ### endpoint These commands manage Nexus Endpoints. Nexus Endpoint commands follow this syntax: ``` temporal operator nexus endpoint [command] [options] ``` #### create Create a Nexus Endpoint on the Server. A Nexus Endpoint name is used in Workflow code to invoke Nexus Operations. The endpoint target may either be a Worker, in which case `--target-namespace` and `--target-task-queue` must both be provided, or an external URL, in which case `--target-url` must be provided. This command will fail if an Endpoint with the same name is already registered. ``` temporal operator nexus endpoint create \ --name your-endpoint \ --target-namespace your-namespace \ --target-task-queue your-task-queue \ --description-file DESCRIPTION.md ``` Use the following options to change the behavior of this command. **Flags:** **--description** _string_ Nexus Endpoint description. You may use Markdown formatting in the Nexus Endpoint description. **--description-file** _string_ Path to the Nexus Endpoint description file. The contents of the description file may use Markdown formatting. **--name** _string_ Endpoint name. Required. **--target-namespace** _string_ Namespace where a handler Worker polls for Nexus tasks. **--target-task-queue** _string_ Task Queue that a handler Worker polls for Nexus tasks. 
**--target-url** _string_ An external Nexus Endpoint that receives forwarded Nexus requests. May be used as an alternative to `--target-namespace` and `--target-task-queue`. :::note Option is experimental. ::: **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. 
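As a sketch of the external-URL target described above, combined with the client-certificate flags listed here for an mTLS-secured Service (the URL, host, and file names are placeholders, and `--target-url` is experimental):

```
temporal operator nexus endpoint create \
  --name your-endpoint \
  --target-url https://your-handler.example.com \
  --address your-host:7233 \
  --tls-cert-path client.pem \
  --tls-key-path client.key
```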
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. #### delete Delete a Nexus Endpoint from the Server. ``` temporal operator nexus endpoint delete --name your-endpoint ``` Use the following options to change the behavior of this command. **Flags:** **--name** _string_ Endpoint name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. 
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. #### get Get a Nexus Endpoint by name from the Server. ``` temporal operator nexus endpoint get --name your-endpoint ``` Use the following options to change the behavior of this command. **Flags:** **--name** _string_ Endpoint name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. 
This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. #### list List all Nexus Endpoints on the Server. ``` temporal operator nexus endpoint list ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. 
Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. #### update Update an existing Nexus Endpoint on the Server. A Nexus Endpoint name is used in Workflow code to invoke Nexus Operations. The Endpoint target may either be a Worker, in which case `--target-namespace` and `--target-task-queue` must both be provided, or an external URL, in which case `--target-url` must be provided. The Endpoint is patched; existing fields for which flags are not provided are left as they were. Update only the target task queue: ``` temporal operator nexus endpoint update \ --name your-endpoint \ --target-task-queue your-other-queue ``` Update only the description: ``` temporal operator nexus endpoint update \ --name your-endpoint \ --description-file DESCRIPTION.md ``` Use the following options to change the behavior of this command. **Flags:** **--description** _string_ Nexus Endpoint description. You may use Markdown formatting in the Nexus Endpoint description. **--description-file** _string_ Path to the Nexus Endpoint description file. The contents of the description file may use Markdown formatting. **--name** _string_ Endpoint name. Required. **--target-namespace** _string_ Namespace where a handler Worker polls for Nexus tasks. **--target-task-queue** _string_ Task Queue that a handler Worker polls for Nexus tasks. **--target-url** _string_ An external Nexus Endpoint that receives forwarded Nexus requests. May be used as an alternative to `--target-namespace` and `--target-task-queue`. :::note Option is experimental. ::: **--unset-description** _bool_ Unset the description. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. 
:::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## search-attribute Create, list, or remove Search Attribute fields stored in a Workflow Execution's metadata: ``` temporal operator search-attribute create \ --name YourAttributeName \ --type Keyword ``` Supported types include: Text, Keyword, Int, Double, Bool, Datetime, and KeywordList. If you wish to delete a Search Attribute, please contact support at https://support.temporal.io. ### create Add one or more custom Search Attributes: ``` temporal operator search-attribute create \ --name YourAttributeName \ --type Keyword ``` Use the following options to change the behavior of this command. **Flags:** **--name** _string[]_ Search Attribute name. Required. **--type** _string-enum[]_ Search Attribute type. Required. Accepted values: Text, Keyword, Int, Double, Bool, Datetime, KeywordList. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
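Because `--name` and `--type` accept multiple values, several Search Attributes can be registered in one call; the CLI pairs each name with the type given in the same position. A sketch with placeholder attribute names:

```
temporal operator search-attribute create \
  --name YourKeywordAttribute --type Keyword \
  --name YourCounterAttribute --type Int
```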
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### list Display a list of active Search Attributes that can be assigned or used with Workflow Queries. 
You can manage this list and add attributes as needed: ``` temporal operator search-attribute list ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. 
Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### remove Remove custom Search Attributes from the options that can be assigned or used with Workflow Queries. ``` temporal operator search-attribute remove \ --name YourAttributeName ``` Remove attributes without confirmation: ``` temporal operator search-attribute remove \ --name YourAttributeName \ --yes ``` Use the following options to change the behavior of this command. **Flags:** **--name** _string[]_ Search Attribute name. Required. **--yes**, **-y** _bool_ Don't prompt to confirm removal. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. 
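As with `create`, the `--name` flag accepts multiple values, so several custom Search Attributes can be removed in one call (the names are placeholders):

```
temporal operator search-attribute remove \
  --name YourAttributeName \
  --name YourOtherAttributeName \
  --yes
```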
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. --- ## Temporal CLI schedule command reference {/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */} ## backfill Batch-execute actions that would have run during a specified time interval. Use this command to fill in Workflow runs from when a Schedule was paused, before a Schedule was created, from the future, or to re-process a previously executed interval. Backfills require a Schedule ID and the time period covered by the request. It's best to use the `BufferAll` or `AllowAll` policies to avoid conflicts and ensure no Workflow Executions are skipped. For example: ``` temporal schedule backfill \ --schedule-id "YourScheduleId" \ --start-time "2022-05-01T00:00:00Z" \ --end-time "2022-05-31T23:59:59Z" \ --overlap-policy BufferAll ``` The policies include: * **AllowAll**: Allow unlimited concurrent Workflow Executions. This significantly speeds up the backfilling process on systems that support concurrency. You must ensure running Workflow Executions do not interfere with each other. * **BufferAll**: Buffer all incoming Workflow Executions while waiting for the running Workflow Execution to complete. * **Skip**: If a previous Workflow Execution is still running, discard new Workflow Executions. * **BufferOne**: Same as 'Skip' but buffer a single Workflow Execution to be run after the previous Execution completes. Discard other Workflow Executions. * **CancelOther**: Cancel the running Workflow Execution and replace it with the incoming new Workflow Execution. * **TerminateOther**: Terminate the running Workflow Execution and replace it with the incoming new Workflow Execution. Use the following options to change the behavior of this command. **Flags:** **--end-time** _timestamp_ Backfill end time. Required. **--overlap-policy** _string-enum_ Policy for handling overlapping Workflow Executions. Accepted values: Skip, BufferOne, BufferAll, CancelOther, TerminateOther, AllowAll. (default "Skip") **--schedule-id**, **-s** _string_ Schedule ID. Required. **--start-time** _timestamp_ Backfill start time. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. 
Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## create Create a new Schedule on the Temporal Service. A Schedule automatically starts new Workflow Executions at the times you specify. For example: ``` temporal schedule create \ --schedule-id "YourScheduleId" \ --calendar '{"dayOfWeek":"Fri","hour":"3","minute":"30"}' \ --workflow-id YourBaseWorkflowIdName \ --task-queue YourTaskQueue \ --type YourWorkflowType ``` Schedules support any combination of `--calendar`, `--interval`, and `--cron`: * Shorthand `--interval` strings. For example: 45m (every 45 minutes) or 6h/5h (every 6 hours, at the top of the 5th hour). * JSON `--calendar`, as in the preceding example. 
* Unix-style `--cron` strings, including robfig-style predefined declarations (`@daily`, `@weekly`, `@every <duration>`, and so on). For example, every Friday at 12:30 PM: `30 12 * * Fri`. Use the following options to change the behavior of this command. **Flags:** **--calendar** _string[]_ Calendar specification in JSON. For example: `{"dayOfWeek":"Fri","hour":"17","minute":"5"}`. **--catchup-window** _duration_ Maximum catch-up time for when the Service is unavailable. **--cron** _string[]_ Calendar specification in cron string format. For example: `"30 12 * * Fri"`. **--end-time** _timestamp_ Schedule end time. **--execution-timeout** _duration_ Fail a Workflow Execution if it lasts longer than `DURATION`. This time-out includes retries and ContinueAsNew tasks. **--fairness-key** _string_ Fairness key (max 64 bytes) for proportional task dispatch. Tasks with the same key share capacity based on their weight. **--fairness-weight** _float_ Weight [0.001-1000] for this fairness key. Keys are dispatched proportionally to their weights. **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--interval** _string[]_ Interval duration. For example, 90m, or 60m/15m to include phase offset. **--jitter** _duration_ Max difference in time from the specification. Vary the start time randomly within this amount. **--memo** _string[]_ Memo using `KEY="VALUE"` pairs. Use JSON values. **--notes** _string_ Initial notes field value. **--overlap-policy** _string-enum_ Policy for handling overlapping Workflow Executions. Accepted values: Skip, BufferOne, BufferAll, CancelOther, TerminateOther, AllowAll. (default "Skip") **--pause-on-failure** _bool_ Pause schedule after Workflow failures. **--paused** _bool_ Pause the Schedule immediately on creation. **--priority-key** _int_ Priority key (1-5, lower numbers = higher priority). Tasks in a queue should be processed in close-to-priority order. Default is 3 when not specified. **--remaining-actions** _int_ Total allowed actions. Default is zero (unlimited). **--run-timeout** _duration_ Fail a Workflow Run if it lasts longer than `DURATION`. **--schedule-id**, **-s** _string_ Schedule ID. Required. **--schedule-memo** _string[]_ Set schedule memo using `KEY=VALUE` pairs. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--schedule-search-attribute** _string[]_ Set schedule Search Attributes using `KEY=VALUE` pairs. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--search-attribute** _string[]_ Search Attribute in `KEY=VALUE` format. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--start-time** _timestamp_ Schedule start time.
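The interval and time-window flags above compose naturally. For example, a sketch of an interval-driven Schedule with jitter and a bounded active window (IDs and timestamps are placeholders):

```
temporal schedule create \
  --schedule-id "YourScheduleId" \
  --interval "6h" \
  --jitter "10m" \
  --start-time "2024-01-01T00:00:00Z" \
  --end-time "2024-12-31T23:59:59Z" \
  --workflow-id YourBaseWorkflowIdName \
  --task-queue YourTaskQueue \
  --type YourWorkflowType
```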
**--static-details** _string_ Static Workflow details for human consumption in UIs. Uses Temporal Markdown formatting, may be multiple lines. :::note Option is experimental. ::: **--static-summary** _string_ Static Workflow summary for human consumption in UIs. Uses Temporal Markdown formatting, should be a single line. :::note Option is experimental. ::: **--task-queue**, **-t** _string_ Workflow Task queue. Required. **--task-timeout** _duration_ Fail a Workflow Task if it lasts longer than `DURATION`. This is the Start-to-close timeout for a Workflow Task. (default "10s") **--time-zone** _string_ Interpret calendar specs with the `TZ` time zone. For a list of time zones, see: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones. **--type** _string_ Workflow Type name. Required. **--workflow-id**, **-w** _string_ Workflow ID. If not supplied, the Service generates a unique ID. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. 
(default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## delete Deletes a Schedule on the front end Service: ``` temporal schedule delete \ --schedule-id YourScheduleId ``` Removing a Schedule won't affect the Workflow Executions it started that are still running. To cancel or terminate these Workflow Executions, use `temporal workflow delete` with the `TemporalScheduledById` Search Attribute instead. Use the following options to change the behavior of this command. **Flags:** **--schedule-id**, **-s** _string_ Schedule ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. 
(default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## describe Show a Schedule configuration, including information about past, current, and future Workflow runs: ``` temporal schedule describe \ --schedule-id YourScheduleId ``` Use the following options to change the behavior of this command. **Flags:** **--schedule-id**, **-s** _string_ Schedule ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. 
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## list Lists the Schedules hosted by a Namespace: ``` temporal schedule list \ --namespace YourNamespace ``` Use the following options to change the behavior of this command. **Flags:** **--long**, **-l** _bool_ Show detailed information. **--query**, **-q** _string_ Filter results using given List Filter. **--really-long** _bool_ Show extensive information in non-table form. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. 
::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## toggle Pause or unpause a Schedule by passing a flag with your desired state: ``` temporal schedule toggle \ --schedule-id "YourScheduleId" \ --pause \ --reason "YourReason" ``` and ``` temporal schedule toggle --schedule-id "YourScheduleId" \ --unpause \ --reason "YourReason" ``` The `--reason` text updates the Schedule's `notes` field for operations communication. It defaults to "(no reason provided)" if omitted. This field is also visible on the Service Web UI. Use the following options to change the behavior of this command. **Flags:** **--pause** _bool_ Pause the Schedule. **--reason** _string_ Reason for pausing or unpausing the Schedule. (default "(no reason provided)") **--schedule-id**, **-s** _string_ Schedule ID. Required. **--unpause** _bool_ Unpause the Schedule. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. 
May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## trigger Trigger a Schedule to run immediately: ``` temporal schedule trigger \ --schedule-id "YourScheduleId" ``` Use the following options to change the behavior of this command. **Flags:** **--overlap-policy** _string-enum_ Policy for handling overlapping Workflow Executions. Accepted values: Skip, BufferOne, BufferAll, CancelOther, TerminateOther, AllowAll. (default "Skip") **--schedule-id**, **-s** _string_ Schedule ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. 
(default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. 
## update

Update an existing Schedule with new configuration details, including time specifications, action, and policies:

```
temporal schedule update \
 --schedule-id "YourScheduleId" \
 --task-queue YourTaskQueue \
 --type "NewWorkflowType"
```
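Because `--task-queue` and `--type` are marked Required below, an update supplies the Schedule's full target action rather than a patch of a single field. A sketch (placeholder values) that also refreshes the spec and policies using the flags documented below:

```
temporal schedule update \
 --schedule-id "YourScheduleId" \
 --task-queue YourTaskQueue \
 --type "NewWorkflowType" \
 --interval "60m/15m" \
 --jitter "2m" \
 --overlap-policy "BufferOne" \
 --catchup-window "10m"
```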
Use the following options to change the behavior of this command.

**Flags:**

**--calendar** _string[]_ Calendar specification in JSON. For example: `{"dayOfWeek":"Fri","hour":"17","minute":"5"}`.
**--catchup-window** _duration_ Maximum catch-up time for when the Service is unavailable.
**--cron** _string[]_ Calendar specification in cron string format. For example: `"30 12 * * Fri"`.
**--end-time** _timestamp_ Schedule end time.
**--execution-timeout** _duration_ Fail a WorkflowExecution if it lasts longer than `DURATION`. This timeout includes retries and ContinueAsNew tasks.
**--fairness-key** _string_ Fairness key (max 64 bytes) for proportional task dispatch. Tasks with the same key share capacity based on their weight.
**--fairness-weight** _float_ Weight [0.001-1000] for this fairness key. Keys are dispatched proportionally to their weights.
**--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments.
**--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them.
**--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments.
**--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order.
**--interval** _string[]_ Interval duration. For example, 90m, or 60m/15m to include phase offset.
**--jitter** _duration_ Max difference in time from the specification. Vary the start time randomly within this amount.
**--memo** _string[]_ Memo using 'KEY="VALUE"' pairs. Use JSON values.
**--notes** _string_ Initial notes field value.
**--overlap-policy** _string-enum_ Policy for handling overlapping Workflow Executions. Accepted values: Skip, BufferOne, BufferAll, CancelOther, TerminateOther, AllowAll. (default "Skip")
**--pause-on-failure** _bool_ Pause schedule after Workflow failures.
**--paused** _bool_ Pause the Schedule immediately on creation.
**--priority-key** _int_ Priority key (1-5, lower numbers = higher priority). Tasks in a queue should be processed in close-to-priority-order. Default is 3 when not specified.
**--remaining-actions** _int_ Total allowed actions. Default is zero (unlimited).
**--run-timeout** _duration_ Fail a Workflow Run if it lasts longer than `DURATION`.
**--schedule-id**, **-s** _string_ Schedule ID. Required.
**--schedule-memo** _string[]_ Set schedule memo using `KEY="VALUE"` pairs. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times.
**--schedule-search-attribute** _string[]_ Set schedule Search Attributes using `KEY="VALUE"` pairs. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times.
**--search-attribute** _string[]_ Search Attribute in `KEY=VALUE` format. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times.
**--start-time** _timestamp_ Schedule start time.
**--static-details** _string_ Static Workflow details for human consumption in UIs. Uses Temporal Markdown formatting, may be multiple lines. :::note Option is experimental. :::
**--static-summary** _string_ Static Workflow summary for human consumption in UIs. Uses Temporal Markdown formatting, should be a single line. :::note Option is experimental. :::
**--task-queue**, **-t** _string_ Workflow Task queue. Required.
**--task-timeout** _duration_ Fail a Workflow Task if it lasts longer than `DURATION`. This is the Start-to-close timeout for a Workflow Task. (default "10s")
**--time-zone** _string_ Interpret calendar specs with the `TZ` time zone. For a list of time zones, see: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
**--type** _string_ Workflow Type name. Required.
**--workflow-id**, **-w** _string_ Workflow ID. If not supplied, the Service generates a unique ID.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")
**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. :::
**--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. :::
**--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. :::
**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".
**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")
**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")
**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")
**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.
**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")
**--profile** _string_ Profile to use for config file. :::note Option is experimental. :::
**--time-format** _string-enum_ Time format.
Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

---

## Temporal CLI server command reference

{/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */}

## start-dev

Run a development Temporal Server on your local system.

```
+------------------------------------------------------------------------+
| WARNING: The development server is not intended for production use.    |
| It skips certain HTTP security checks to make local use simpler.       |
|                                                                        |
| For production use, see:                                               |
| https://docs.temporal.io/production-deployment                         |
+------------------------------------------------------------------------+
```

View the Web UI for the default configuration at: http://localhost:8233

```
temporal server start-dev
```

Add persistence for Workflow Executions across runs:

```
temporal server start-dev \
 --db-filename path-to-your-local-persistent-store
```

Set the port for the front-end gRPC Service (7233 default):

```
temporal server start-dev \
 --port 7000
```

Use a custom port for the Web UI. The default is the gRPC port (7233 default) plus 1000 (8233):

```
temporal server start-dev \
 --ui-port 3000
```
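The launch flags listed below can be combined. For example, a sketch (placeholder names) that keeps a persistent store, pre-creates an additional Namespace, and registers a custom Search Attribute:

```
temporal server start-dev \
 --db-filename temporal.db \
 --namespace YourNamespace \
 --search-attribute "YourAttribute=Keyword"
```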
Use the following options to change the behavior of this command.

**Flags:**

**--db-filename**, **-f** _string_ Path to file for persistent Temporal state store. By default, Workflow Executions are lost when the server process dies.
**--dynamic-config-value** _string[]_ Dynamic configuration value using `KEY=VALUE` pairs. Keys must be identifiers, and values must be JSON values. For example: 'YourKey="YourString"'. Can be passed multiple times.
**--headless** _bool_ Disable the Web UI.
**--http-port** _int_ Port for the HTTP API service. Defaults to a random free port. (default "0")
**--ip** _string_ IP address bound to the front-end Service. (default "localhost")
**--log-config** _bool_ Log the server config to stderr.
**--metrics-port** _int_ Port for the '/metrics' HTTP endpoint. Defaults to a random free port.
**--namespace**, **-n** _string[]_ Namespaces to be created at launch. The "default" Namespace is always created automatically.
**--port**, **-p** _int_ Port for the front-end gRPC Service. (default "7233")
**--search-attribute** _string[]_ Search attributes to register using `KEY=VALUE` pairs. Keys must be identifiers, and values must be the search attribute type, which is one of the following: Text, Keyword, Int, Double, Bool, Datetime, KeywordList.
**--sqlite-pragma** _string[]_ SQLite pragma statements in "PRAGMA=VALUE" format.
**--ui-asset-path** _string_ UI custom assets path.
**--ui-codec-endpoint** _string_ UI remote codec HTTP endpoint.
**--ui-ip** _string_ IP address bound to the Web UI. Defaults to same as '--ip' value.
**--ui-port** _int_ Port for the Web UI. Defaults to '--port' value + 1000.
**--ui-public-path** _string_ The public base path for the Web UI. Defaults to `/`.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")
**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. :::
**--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. :::
**--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. :::
**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".
**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")
**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")
**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")
**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.
**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")
**--profile** _string_ Profile to use for config file. :::note Option is experimental. :::
**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate.
Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

---

## Set up the Temporal CLI

The Temporal CLI is a command-line tool for interacting with the Temporal Service. It helps you manage, monitor, and debug Temporal applications. You can also use it to run a local development server and interact with Temporal Applications from the command line.

With the Temporal CLI, you can:

- Run a local Temporal Service for development
- Start Workflow Executions on any Temporal Service (local, self-hosted, or Temporal Cloud)
- Interact with running Workflows
- Inspect the state of Workflows and Activities
- Manage Namespaces, Schedules, and Task Queues
- Monitor and debug application behavior

## Install the CLI

The CLI is available for macOS, Linux, and Windows, or as a Docker image.

**macOS:** Install with Homebrew:

```bash
brew install temporal
```

Or download from the CDN:

- [Darwin amd64](https://temporal.download/cli/archive/latest?platform=darwin&arch=amd64)
- [Darwin arm64](https://temporal.download/cli/archive/latest?platform=darwin&arch=arm64)

Extract the archive and add the `temporal` binary to your `PATH`.

**Linux:** Install with Homebrew (if available):

```bash
brew install temporal
```

Or download from the CDN:

- [Linux amd64](https://temporal.download/cli/archive/latest?platform=linux&arch=amd64)
- [Linux arm64](https://temporal.download/cli/archive/latest?platform=linux&arch=arm64)

Extract the archive and add the `temporal` binary to your `PATH`.

**Windows:** Download from the CDN:

- [Windows amd64](https://temporal.download/cli/archive/latest?platform=windows&arch=amd64)
- [Windows arm64](https://temporal.download/cli/archive/latest?platform=windows&arch=arm64)

Extract the archive and add the `temporal.exe` binary to your `PATH`.

**Docker:** The Temporal CLI container image is available on [DockerHub](https://hub.docker.com/r/temporalio/temporal) and can be run directly:

```shell
docker run --rm temporalio/temporal --help
```

## Run the development server

The CLI includes a local Temporal development service for fast feedback while building your application.

Start the server:

```bash
temporal server start-dev \
 --db-filename path/to/local-persistent-store
```

View available options:

```bash
temporal server start-dev \
 --help
```

:::note
When running the CLI inside Docker, for the development server to be accessible from the host system, the server needs to be configured to listen on an external IP and the ports need to be forwarded:

```shell
docker run --rm \
 -p 7233:7233 -p 8233:8233 \
 temporalio/temporal server start-dev \
 --ip 0.0.0.0

# UI is now accessible from host at http://localhost:8233/
```
:::

### What the local server provides

- A local instance of the Temporal Service
- Automatic startup of the Web UI
- A default Namespace
- Optional persistence using SQLite

Omitting `--db-filename` uses an in-memory database. This speeds up testing but does not persist Workflow data between sessions.

### Access the Web UI

- Temporal Service: `localhost:7233`
- Web UI: [http://localhost:8233](http://localhost:8233)

:::tip
The CLI works with all Temporal SDKs. Use it to develop and test your application before deploying to production.
:::
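The same binary also talks to remote Temporal Services: point it at another endpoint with the global connection flags documented throughout the command references. A sketch with a placeholder address and Namespace:

```bash
temporal workflow list \
 --address YourHost:7233 \
 --namespace YourNamespace
```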
## Getting CLI help

From the command line:

```
temporal --help
```

For example:

- `temporal --help`
- `temporal workflow --help`
- `temporal workflow delete --help`

Available commands:

| Command | Description |
| ---------------------------------- | ----------------------------------------------------------- |
| [**activity**](/cli/activity) | Complete, update, pause, unpause, reset or fail an Activity |
| [**batch**](/cli/batch) | Manage running batch jobs |
| [**completion**](/cli/cmd-options) | Generate the autocompletion script for the specified shell |
| [**env**](/cli/env) | Manage environments |
| [**operator**](/cli/operator) | Manage Temporal deployments |
| [**schedule**](/cli/schedule) | Perform operations on Schedules |
| [**server**](/cli/server) | Run Temporal Server |
| [**task-queue**](/cli/task-queue) | Manage Task Queues |
| [**worker**](/cli/worker) | Read or update Worker state |
| [**workflow**](/cli/workflow) | Start, list, and operate on Workflows |

---

## Temporal CLI task-queue command reference

{/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */}

## config

Manage Task Queue configuration:

```
temporal task-queue config [command] [options]
```

Available commands:

- `get`: Retrieve the current configuration for a Task Queue
- `set`: Update the configuration for a Task Queue

### get

Retrieve the current configuration for a Task Queue:

```
temporal task-queue config get \
 --task-queue YourTaskQueue \
 --task-queue-type activity
```

This command returns the current configuration, including:

- Queue rate limit: The overall rate limit of the Task Queue. This setting overrides the worker rate limit if set. Unless modified, this is the system-defined rate limit.
- Fairness key rate limit defaults: Default rate limits for fairness keys. If set, each individual fairness key will be limited to this rate, scaled by the weight of the fairness key.

Use the following options to change the behavior of this command.

**Flags:**

**--task-queue**, **-t** _string_ Task Queue name. Required.
**--task-queue-type** _string-enum_ Task Queue type. Required. Accepted values: workflow, activity, nexus.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")
**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. :::
**--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental.
:::
**--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. :::
**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".
**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")
**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")
**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")
**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.
**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")
**--profile** _string_ Profile to use for config file. :::note Option is experimental. :::
**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

### set

Update configuration settings for a Task Queue:

```
temporal task-queue config set \
 --task-queue YourTaskQueue \
 --task-queue-type activity \
 --namespace YourNamespace \
 --queue-rps-limit 100 \
 --queue-rps-limit-reason "YourReason" \
 --fairness-key-rps-limit-default 10 \
 --fairness-key-rps-limit-reason "YourReason"
```

This command supports updating:

- Queue rate limits: Controls the overall rate limit of the Task Queue. This setting overrides the worker rate limit if set. Unless modified, this is the system-defined rate limit.
- Fairness key rate limit defaults: Sets default rate limits for fairness keys. If set, each individual fairness key will be limited to this rate, scaled by the weight of the fairness key.

To unset a rate limit, pass in 'default', for example: `--queue-rps-limit default`.
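For example, a sketch (placeholder names) that removes a previously applied queue rate limit, falling back to the system default:

```
temporal task-queue config set \
 --task-queue YourTaskQueue \
 --task-queue-type activity \
 --queue-rps-limit default \
 --queue-rps-limit-reason "YourReason"
```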
Use the following options to change the behavior of this command.

**Flags:**

**--fairness-key-rps-limit-default** _string_ Fairness key rate limit default in requests per second. Accepts a float; or 'default' to unset.
**--fairness-key-rps-limit-reason** _string_ Reason for fairness key rate limit update.
**--queue-rps-limit** _string_ Queue rate limit in requests per second. Accepts a float; or 'default' to unset.
**--queue-rps-limit-reason** _string_ Reason for queue rate limit update.
**--task-queue**, **-t** _string_ Task Queue name. Required.
**--task-queue-type** _string-enum_ Task Queue type. Required. Accepted values: workflow, activity, nexus.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")
**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. :::
**--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. :::
**--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. :::
**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".
**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")
**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")
**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")
**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.
**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")
**--profile** _string_ Profile to use for config file. :::note Option is experimental. :::
**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

## describe

Display a list of active Workers that have recently polled a Task Queue. The Temporal Server records each poll request time. A `LastAccessTime` over one minute may indicate that the Worker is at capacity or has shut down. Temporal Workers are removed if 5 minutes have passed since the last poll request.

```
temporal task-queue describe \
 --task-queue YourTaskQueue
```

This command provides poller information for a given Task Queue. Workflow and Activity polling use separate Task Queues:

```
temporal task-queue describe \
 --task-queue YourTaskQueue \
 --task-queue-type "activity"
```

This command provides the following Task Queue statistics:

- `ApproximateBacklogCount`: The approximate number of tasks backlogged in this Task Queue. May count expired tasks but eventually converges to the right value.
- `ApproximateBacklogAge`: Approximate age of the oldest task in the backlog, based on its creation time, measured in seconds.
- `TasksAddRate`: Approximate rate at which tasks are being added to the Task Queue, measured in tasks per second, averaged over the last 30 seconds. Includes tasks dispatched immediately without going to the backlog (sync-matched tasks), as well as tasks added to the backlog. (See note below.)
- `TasksDispatchRate`: Approximate rate at which tasks are being dispatched from the Task Queue, measured in tasks per second, averaged over the last 30 seconds. Includes tasks dispatched immediately without going to the backlog (sync-matched tasks), as well as tasks dispatched from the backlog. (See note below.)
- `BacklogIncreaseRate`: Approximate rate at which the backlog size is increasing (if positive) or decreasing (if negative), measured in tasks per second, averaged over the last 30 seconds. This is roughly equivalent to: `TasksAddRate` - `TasksDispatchRate`.

NOTE: The `TasksAddRate` and `TasksDispatchRate` metrics may differ from the actual rate of add/dispatch, because tasks may be dispatched eagerly to an available worker, or may apply only to specific workers (they are "sticky"). Such tasks are not counted by these metrics. Despite the inaccuracy of these two metrics, the derived metric of `BacklogIncreaseRate` is accurate for backlogs older than a few seconds.
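Because these statistics are approximate and averaged over 30-second windows, they are most useful when sampled repeatedly. For scripted monitoring, the global `--output` flag returns them in machine-readable form; a sketch with a placeholder Task Queue:

```
temporal task-queue describe \
 --task-queue YourTaskQueue \
 --output json
```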
Safely retire Workers assigned a Build ID by checking reachability across all task types. Use the flag `--report-reachability`:

```
temporal task-queue describe \
 --task-queue YourTaskQueue \
 --select-build-id "YourBuildId" \
 --report-reachability
```

Task reachability information is returned for the requested versions and all task types, which can be used to safely retire Workers with old code versions, provided that they were assigned a Build ID.

Note that task reachability status is experimental and may significantly change or be removed in a future release. Also, determining task reachability incurs a non-trivial computing cost.

Task reachability states are reported per Build ID. The state may be one of the following:

- `Reachable`: Using the current versioning rules, the Build ID may be used by new Workflow Executions or Activities, OR there are currently open Workflow or backlogged Activity tasks assigned to the queue.
- `ClosedWorkflowsOnly`: The Build ID does not have open Workflow Executions and can't be reached by new Workflow Executions. It MAY have closed Workflow Executions within the Namespace retention period.
- `Unreachable`: This Build ID is not used for new Workflow Executions and isn't used by any existing Workflow Execution within the retention period.

Task reachability is eventually consistent. You may experience a delay until reachability converges to the most accurate value. This is designed to act in the most conservative way until convergence. For example, `Reachable` is more conservative than `ClosedWorkflowsOnly`.

Use the following options to change the behavior of this command.

**Flags:**

**--disable-stats** _bool_ Disable task queue statistics.
**--legacy-mode** _bool_ Enable a legacy mode for servers that do not support rules-based worker versioning. This mode only provides pollers info.
**--partitions-legacy** _int_ Query partitions 1 through `N`. Experimental/Temporary feature. Legacy mode only. (default "1")
**--report-config** _bool_ Include task queue configuration in the response. When enabled, the command will return the current rate limit configuration for the task queue.
**--report-reachability** _bool_ Display task reachability information.
**--select-all-active** _bool_ Include all active versions. A version is active if it had new tasks or polls recently.
**--select-build-id** _string[]_ Filter the Task Queue based on Build ID.
**--select-unversioned** _bool_ Include the unversioned queue.
**--task-queue**, **-t** _string_ Task Queue name. Required.
**--task-queue-type** _string-enum[]_ Task Queue type. If not specified, all types are reported. Accepted values: workflow, activity, nexus.
**--task-queue-type-legacy** _string-enum_ Task Queue type (legacy mode only). Accepted values: workflow, activity. (default "workflow")

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")
**--api-key** _string_ API key for request.
**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests.
**--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")
**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. :::
**--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. :::
**--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. :::
**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## get-build-id-reachability ``` +-----------------------------------------------------------------------------+ | CAUTION: This command is deprecated and will be removed in a later release. | +-----------------------------------------------------------------------------+ ``` Show if a given Build ID can be used for new, existing, or closed Workflows in Namespaces that support Worker versioning: ``` temporal task-queue get-build-id-reachability \ --task-queue YourTaskQueue \ --build-id "YourBuildId" ``` You can specify the `--build-id` and `--task-queue` flags multiple times. If `--task-queue` is omitted, the command checks Build ID reachability against all Task Queues. Use the following options to change the behavior of this command. **Flags:** **--build-id** _string[]_ One or more Build ID strings. Can be passed multiple times. **--reachability-type** _string-enum_ Reachability filter. `open`: reachable by one or more open workflows. `closed`: reachable by one or more closed workflows. `existing`: reachable by either. New Workflow Executions reachable by a Build ID are always reported. Accepted values: open, closed, existing. (default "existing") **--task-queue**, **-t** _string[]_ Search only the specified task queue(s). Can be passed multiple times. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. 
**--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## get-build-ids ``` +-----------------------------------------------------------------------------+ | CAUTION: This command is deprecated and will be removed in a later release. 
| +-----------------------------------------------------------------------------+ ``` Fetch sets of compatible Build IDs for specified Task Queues and display their information: ``` temporal task-queue get-build-ids \ --task-queue YourTaskQueue ``` This command is limited to Namespaces that support Worker versioning. Use the following options to change the behavior of this command. **Flags:** **--max-sets** _int_ Max return count. Use 1 for default major version. Use 0 for all sets. (default "0") **--task-queue**, **-t** _string_ Task Queue name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. 
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## list-partition Display a Task Queue's partition list with assigned matching nodes: ``` temporal task-queue list-partition \ --task-queue YourTaskQueue ``` Use the following options to change the behavior of this command. **Flags:** **--task-queue**, **-t** _string_ Task Queue name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. 
(default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## update-build-ids ``` +-----------------------------------------------------------------------------+ | CAUTION: This command is deprecated and will be removed in a later release. | +-----------------------------------------------------------------------------+ ``` Add or change a Task Queue's compatible Build IDs for Namespaces using Worker versioning: ``` temporal task-queue update-build-ids [subcommands] [options] \ --task-queue YourTaskQueue ``` ### add-new-compatible Add a compatible Build ID to a Task Queue's existing version set. Provide an existing Build ID and a new Build ID: ``` temporal task-queue update-build-ids add-new-compatible \ --task-queue YourTaskQueue \ --existing-compatible-build-id "YourExistingBuildId" \ --build-id "YourNewBuildId" ``` The new ID is stored in the set containing the existing ID and becomes the new default for that set. This command is limited to Namespaces that support Worker versioning. Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Build ID to be added. Required. **--existing-compatible-build-id** _string_ Pre-existing Build ID in this Task Queue. Required. **--set-as-default** _bool_ Set the expanded Build ID set as the Task Queue default. **--task-queue**, **-t** _string_ Task Queue name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. 
:::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### add-new-default ``` +-----------------------------------------------------------------------------+ | CAUTION: This command is deprecated and will be removed in a later release. | +-----------------------------------------------------------------------------+ ``` Create a new Task Queue Build ID set, add a Build ID to it, and make it the overall Task Queue default. The new set will be incompatible with previous sets and versions. ``` temporal task-queue update-build-ids add-new-default \ --task-queue YourTaskQueue \ --build-id "YourNewBuildId" ``` ``` +------------------------------------------------------------------------+ | NOTICE: This command is limited to Namespaces that support Worker | | versioning. Worker versioning is experimental. Versioning commands are | | subject to change. | +------------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Build ID to be added. Required. **--task-queue**, **-t** _string_ Task Queue name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. 
**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### promote-id-in-set ``` +-----------------------------------------------------------------------------+ | CAUTION: This command is deprecated and will be removed in a later release. 
| +-----------------------------------------------------------------------------+ ``` Establish an existing Build ID as the default in its Task Queue set. New tasks compatible with this set will now be dispatched to this ID: ``` temporal task-queue update-build-ids promote-id-in-set \ --task-queue YourTaskQueue \ --build-id "YourBuildId" ``` ``` +------------------------------------------------------------------------+ | NOTICE: This command is limited to Namespaces that support Worker | | versioning. Worker versioning is experimental. Versioning commands are | | subject to change. | +------------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Build ID to set as default. Required. **--task-queue**, **-t** _string_ Task Queue name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. 
Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### promote-set ``` +-----------------------------------------------------------------------------+ | CAUTION: This command is deprecated and will be removed in a later release. | +-----------------------------------------------------------------------------+ ``` Promote a Build ID set to be the default on a Task Queue. Identify the set by providing a Build ID within it. If the set is already the default, this command has no effect: ``` temporal task-queue update-build-ids promote-set \ --task-queue YourTaskQueue \ --build-id "YourBuildId" ``` ``` +------------------------------------------------------------------------+ | NOTICE: This command is limited to Namespaces that support Worker | | versioning. Worker versioning is experimental. Versioning commands are | | subject to change. | +------------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Build ID within the promoted set. Required. **--task-queue**, **-t** _string_ Task Queue name. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. 
Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## versioning ``` +---------------------------------------------------------------------+ | CAUTION: This API has been deprecated by Worker Deployment. | +---------------------------------------------------------------------+ ``` Provides commands to add, list, remove, or replace Worker Build ID assignment and redirect rules associated with Task Queues: ``` temporal task-queue versioning [subcommands] [options] \ --task-queue YourTaskQueue ``` Task Queues support the following versioning rules and policies: - Assignment Rules: manage how new executions are assigned to run on specific Worker Build IDs. Each Task Queue stores a list of ordered Assignment Rules, which are evaluated from first to last. Assignment Rules also allow for gradual rollout of new Build IDs by setting ramp percentage. - Redirect Rules: automatically assign work for a source Build ID to a target Build ID. You may add at most one redirect rule for each source Build ID. Redirect rules require that a target Build ID is fully compatible with the source Build ID. ### add-redirect-rule Add a new redirect rule for a given Task Queue. You may add at most one redirect rule for each distinct source build ID: ``` temporal task-queue versioning add-redirect-rule \ --task-queue YourTaskQueue \ --source-build-id "YourSourceBuildID" \ --target-build-id "YourTargetBuildID" ``` ``` +---------------------------------------------------------------------+ | CAUTION: This API has been deprecated by Worker Deployment. 
| +---------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--source-build-id** _string_ Source build ID. Required. **--target-build-id** _string_ Target build ID. Required. **--yes**, **-y** _bool_ Don't prompt to confirm. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. 
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### commit-build-id Complete a Build ID's rollout and clean up unnecessary rules that might have been created during a gradual rollout: ``` temporal task-queue versioning commit-build-id \ --task-queue YourTaskQueue --build-id "YourBuildId" ``` This command automatically applies the following atomic changes: - Adds an unconditional assignment rule for the target Build ID at the end of the list. - Removes all previously added assignment rules to the given target Build ID. - Removes any unconditional assignment rules for other Build IDs. Rejects requests when there have been no recent pollers for this Build ID. This prevents committing invalid Build IDs. Use the `--force` option to override this validation. ``` +---------------------------------------------------------------------+ | CAUTION: This API has been deprecated by Worker Deployment. | +---------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Target build ID. Required. **--force** _bool_ Bypass recent-poller validation. **--yes**, **-y** _bool_ Don't prompt to confirm. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. 
Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### delete-assignment-rule Deletes a rule identified by its index in the Task Queue's list of assignment rules. ``` temporal task-queue versioning delete-assignment-rule \ --task-queue YourTaskQueue \ --rule-index YourIntegerRuleIndex ``` By default, the Task Queue must retain one unconditional rule, such as "no hint filter" or "percentage". Otherwise, the delete operation is rejected. Use the `--force` option to override this validation. ``` +---------------------------------------------------------------------+ | CAUTION: This API has been deprecated by Worker Deployment. | +---------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--force** _bool_ Bypass one-unconditional-rule validation. **--rule-index**, **-i** _int_ Position of the assignment rule to be replaced. Requests for invalid indices will fail. Required. **--yes**, **-y** _bool_ Don't prompt to confirm. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. 
:::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### delete-redirect-rule Deletes the routing rule for the given source Build ID. ``` temporal task-queue versioning delete-redirect-rule \ --task-queue YourTaskQueue \ --source-build-id "YourBuildId" ``` ``` +---------------------------------------------------------------------+ | CAUTION: This API has been deprecated by Worker Deployment. | +---------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--source-build-id** _string_ Source Build ID. Required. **--yes**, **-y** _bool_ Don't prompt to confirm. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. 
**--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### get-rules Retrieve all the Worker Build ID assignments and redirect rules associated with a Task Queue: ``` temporal task-queue versioning get-rules \ --task-queue YourTaskQueue ``` Task Queues support the following versioning rules: - Assignment Rules: manage how new executions are assigned to run on specific Worker Build IDs. 
Each Task Queue stores a list of ordered Assignment Rules, which are evaluated from first to last. Assignment Rules also allow for gradual rollout of new Build IDs by setting ramp percentage. - Redirect Rules: automatically assign work for a source Build ID to a target Build ID. You may add at most one redirect rule for each source Build ID. Redirect rules require that a target Build ID is fully compatible with the source Build ID. ``` +---------------------------------------------------------------------+ | CAUTION: This API has been deprecated by Worker Deployment. | +---------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. 
Use --tls=false to explicitly disable.

**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.

**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.

**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.

**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.

**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.

**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.

**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.

**--tls-server-name** _string_ Override target TLS server name.

### insert-assignment-rule

Inserts a new assignment rule for this Task Queue. Rules are evaluated in order, starting from index 0. The first applicable rule is applied, and the rest are ignored:

```
temporal task-queue versioning insert-assignment-rule \
    --task-queue YourTaskQueue \
    --build-id "YourBuildId"
```

If you do not specify a `--rule-index`, this command inserts at index 0.

```
+---------------------------------------------------------------------+
| CAUTION: This API has been deprecated by Worker Deployment.         |
+---------------------------------------------------------------------+
```

Use the following options to change the behavior of this command.

**Flags:**

**--build-id** _string_ Target Build ID. Required.

**--percentage** _int_ Traffic percent to send to target Build ID. (default "100")

**--rule-index**, **-i** _int_ Insertion position. Ranges from 0 (insert at start) to count (append). Any number greater than the count is treated as "append". (default "0")

**--yes**, **-y** _bool_ Don't prompt to confirm.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")

**--api-key** _string_ API key for request.

**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.

**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.

**--codec-auth** _string_ Authorization header for Codec Server requests.

**--codec-endpoint** _string_ Remote Codec Server endpoint.

**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.

**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")

**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.

**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows.

:::note
Option is experimental.
:::

**--disable-config-env** _bool_ If set, disables loading environment config from environment variables.

:::note
Option is experimental.
:::

**--disable-config-file** _bool_ If set, disables loading environment config from config file.

:::note
Option is experimental.
:::

**--env** _string_ Active environment name (`ENV`). (default "default")

**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.

**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.

**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".

**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")

**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")

**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")

**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.

**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")

**--profile** _string_ Profile to use for config file.

:::note
Option is experimental.
:::

**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")

**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.

**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.

**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.

**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.

**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.

**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.

**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.

**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.

**--tls-server-name** _string_ Override target TLS server name.

### replace-assignment-rule

Change an assignment rule for this Task Queue. By default, the Task Queue must retain at least one unconditional rule (a rule with no hint filter or ramp percentage); otherwise, the operation is rejected. Use the `--force` option to bypass this validation.

```
temporal task-queue versioning replace-assignment-rule \
    --task-queue YourTaskQueue \
    --rule-index AnIntegerIndex \
    --build-id "YourBuildId"
```

To assign multiple assignment rules to a single Build ID, use 'insert-assignment-rule'.

To update the percentage:

```
temporal task-queue versioning replace-assignment-rule \
    --task-queue YourTaskQueue \
    --rule-index AnIntegerIndex \
    --build-id "YourBuildId" \
    --percentage AnIntegerPercent
```

The percentage may range from 0 to 100; the default is 100. A combined rollout example using these commands appears at the end of this section.

```
+---------------------------------------------------------------------+
| CAUTION: This API has been deprecated by Worker Deployment.         |
+---------------------------------------------------------------------+
```

Use the following options to change the behavior of this command.

**Flags:**

**--build-id** _string_ Target Build ID. Required.

**--force** _bool_ Bypass the validation that one unconditional rule remains.

**--percentage** _int_ Divert percent of traffic to target Build ID. (default "100")

**--rule-index**, **-i** _int_ Position of the assignment rule to be replaced. Requests for invalid indices will fail. Required.

**--yes**, **-y** _bool_ Don't prompt to confirm.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")

**--api-key** _string_ API key for request.

**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.

**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.
**--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. 
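Taken together, `insert-assignment-rule`, `replace-assignment-rule`, and `commit-build-id` support a gradual rollout of a new Build ID. The following is a minimal sketch of that flow, assuming a hypothetical Build ID `"2.0"` and illustrative ramp percentages; because this API is deprecated in favor of Worker Deployment, treat it as a reference for existing setups rather than a recommended pattern:

```
# Route 10% of new executions to the hypothetical Build ID "2.0".
# Without --rule-index, the rule is inserted at index 0 and evaluated first.
temporal task-queue versioning insert-assignment-rule \
    --task-queue YourTaskQueue \
    --build-id "2.0" \
    --percentage 10

# Raise the ramp to 50% by replacing the rule at index 0.
temporal task-queue versioning replace-assignment-rule \
    --task-queue YourTaskQueue \
    --rule-index 0 \
    --build-id "2.0" \
    --percentage 50

# Complete the rollout: make "2.0" unconditional and clean up the ramp
# rules created above. (Requires recent pollers on "2.0"; use --force
# to bypass that validation, as described under commit-build-id.)
temporal task-queue versioning commit-build-id \
    --task-queue YourTaskQueue \
    --build-id "2.0"
```

Because assignment rules are evaluated from first to last, keeping the ramp rule at index 0 ensures it is considered before any older unconditional rule.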
### replace-redirect-rule Updates a Build ID's redirect rule on a Task Queue by replacing its target Build ID: ``` temporal task-queue versioning replace-redirect-rule \ --task-queue YourTaskQueue \ --source-build-id YourSourceBuildId \ --target-build-id YourNewTargetBuildId ``` ``` +---------------------------------------------------------------------+ | CAUTION: This API has been deprecated by Worker Deployment. | +---------------------------------------------------------------------+ ``` Use the following options to change the behavior of this command. **Flags:** **--source-build-id** _string_ Source Build ID. Required. **--target-build-id** _string_ Target Build ID. Required. **--yes**, **-y** _bool_ Don't prompt to confirm. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. 
Use --tls=false to explicitly disable.

**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.

**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.

**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.

**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.

**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.

**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.

**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.

**--tls-server-name** _string_ Override target TLS server name.

---

## Temporal CLI worker command reference

{/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */}

## deployment

```
+---------------------------------------------------------------------+
| CAUTION: Worker Deployment is experimental. Deployment commands are |
| subject to change.                                                  |
+---------------------------------------------------------------------+
```

Deployment commands perform operations on Worker Deployments:

```
temporal worker deployment [command] [options]
```

For example:

```
temporal worker deployment list
```

Lists the Deployments in the client's namespace.

Some commands operate on a specific Worker Deployment Version, which you identify by the Deployment name and Build ID. For example:

```
temporal worker deployment set-current-version \
    --deployment-name YourDeploymentName --build-id YourBuildID
```

Sets the current Deployment Version for a given Deployment.

### delete

```
+---------------------------------------------------------------------+
| CAUTION: Worker Deployment is experimental. Deployment commands are |
| subject to change.                                                  |
+---------------------------------------------------------------------+
```

Remove a Worker Deployment given its Deployment Name. A Deployment can be deleted only if it has no Versions in it.

```
temporal worker deployment delete [options]
```

For example, to set the identity of the user removing the Deployment:

```
temporal worker deployment delete \
    --name YourDeploymentName \
    --identity YourIdentity
```

Use the following options to change the behavior of this command.

**Flags:**

**--name**, **-d** _string_ Name for a Worker Deployment. Required.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")

**--api-key** _string_ API key for request.

**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.

**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.

**--codec-auth** _string_ Authorization header for Codec Server requests.

**--codec-endpoint** _string_ Remote Codec Server endpoint.

**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.

**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")

**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.

**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows.

:::note
Option is experimental.
:::

**--disable-config-env** _bool_ If set, disables loading environment config from environment variables.

:::note
Option is experimental.
:::

**--disable-config-file** _bool_ If set, disables loading environment config from config file.

:::note
Option is experimental.
:::

**--env** _string_ Active environment name (`ENV`). (default "default")

**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.

**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.

**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".

**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")

**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")

**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")

**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.

**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")

**--profile** _string_ Profile to use for config file.

:::note
Option is experimental.
:::

**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")

**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.

**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.

**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.

**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.

**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.

**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.

**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.

**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.

**--tls-server-name** _string_ Override target TLS server name.

### delete-version

```
+---------------------------------------------------------------------+
| CAUTION: Worker Deployment is experimental. Deployment commands are |
| subject to change.                                                  |
+---------------------------------------------------------------------+
```

Remove a Worker Deployment Version given its fully-qualified identifier (the Deployment name plus the Build ID). This is rarely needed during normal operation since unused Versions are eventually garbage collected. The client can delete a Version only when all of the following conditions are met:

- It is not the Current or Ramping Version for this Deployment.
- It has no active pollers, i.e., none of the task queues in the Version have pollers.
- It is not draining. This requirement can be ignored with the option `--skip-drainage`.
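Before running `delete-version`, you can verify these conditions by describing the Version first. The following is a minimal sketch, assuming your CLI build provides a `describe-version` subcommand that accepts the same identifier flags as `delete-version` (check `temporal worker deployment --help` to confirm):

```
# Hypothetical pre-check: inspect the Version (pollers, drainage status)
# before attempting deletion. Confirm this subcommand and its flags
# exist in your CLI version.
temporal worker deployment describe-version \
    --deployment-name YourDeploymentName \
    --build-id YourBuildID
```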
``` temporal worker deployment delete-version [options] ``` For example, skipping the drainage restriction: ``` temporal worker deployment delete-version \ --deployment-name YourDeploymentName --build-id YourBuildID \ --skip-drainage ``` Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Build ID of the Worker Deployment Version. Required. **--deployment-name** _string_ Name of the Worker Deployment. Required. **--skip-drainage** _bool_ Ignore the deletion requirement of not draining. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. 
Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### describe ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` Describe properties of a Worker Deployment, such as the versions associated with it, routing information of new or existing tasks executed by this deployment, or its creation time. ``` temporal worker deployment describe [options] ``` For example, to describe a deployment `YourDeploymentName` in the default namespace: ``` temporal worker deployment describe \ --name YourDeploymentName ``` Use the following options to change the behavior of this command. **Flags:** **--name**, **-d** _string_ Name for a Worker Deployment. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. 
(default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### describe-version ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` Describe properties of a Worker Deployment Version, such as the task queues polled by workers in this Deployment Version, or drainage information required to safely decommission workers, or user-provided metadata, or its creation/modification time. ``` temporal worker deployment describe-version [options] ``` For example, to describe a deployment version in a deployment `YourDeploymentName`, with Build ID `YourBuildID`, and in the default namespace: ``` temporal worker deployment describe-version \ --deployment-name YourDeploymentName --build-id YourBuildID ``` Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Build ID of the Worker Deployment Version. Required. **--deployment-name** _string_ Name of the Worker Deployment. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. 
::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### list ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` List existing Worker Deployments in the client's namespace. ``` temporal worker deployment list [options] ``` For example, listing Deployments in YourDeploymentNamespace: ``` temporal worker deployment list \ --namespace YourDeploymentNamespace ``` Use the following options to change the behavior of this command. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. 
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### manager-identity ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. 
| +---------------------------------------------------------------------+ ``` Manager Identity commands change the `ManagerIdentity` of a Worker Deployment: ``` temporal worker deployment manager-identity [command] [options] ``` When present, `ManagerIdentity` is the identity of the user that has the exclusive right to make changes to this Worker Deployment. Empty by default. When set, users whose identity does not match the `ManagerIdentity` will not be able to change the Worker Deployment. This is especially useful in environments where multiple users (such as CLI users and automated controllers) may interact with the same Worker Deployment. `ManagerIdentity` allows different users to communicate with one another about who is expected to make changes to the Worker Deployment. The current Manager Identity is returned with `describe`: ``` temporal worker deployment describe \ --name YourDeploymentName ``` #### set ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` Set the `ManagerIdentity` of a Worker Deployment given its Deployment Name. When present, `ManagerIdentity` is the identity of the user that has the exclusive right to make changes to this Worker Deployment. Empty by default. When set, users whose identity does not match the `ManagerIdentity` will not be able to change the Worker Deployment. This is especially useful in environments where multiple users (such as CLI users and automated controllers) may interact with the same Worker Deployment. `ManagerIdentity` allows different users to communicate with one another about who is expected to make changes to the Worker Deployment. ``` temporal worker deployment manager-identity set [options] ``` For example: ``` temporal worker deployment manager-identity set \ --deployment-name DeploymentName \ --self \ --identity YourUserIdentity # optional, populated by CLI if not provided ``` Sets the Manager Identity of the Deployment to the identity of the user making this request. If you don't specifically pass an identity field, the CLI will generate your identity for you. For example: ``` temporal worker deployment manager-identity set \ --deployment-name DeploymentName \ --manager-identity NewManagerIdentity ``` Sets the Manager Identity of the Deployment to any string. Use the following options to change the behavior of this command. **Flags:** **--deployment-name** _string_ Name for a Worker Deployment. Required. **--manager-identity** _string_ New Manager Identity. Required unless --self is specified. **--self** _bool_ Set Manager Identity to the identity of the user submitting this request. Required unless --manager-identity is specified. **--yes**, **-y** _bool_ Don't prompt to confirm set Manager Identity. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.
**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. #### unset ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` Unset the `ManagerIdentity` of a Worker Deployment given its Deployment Name. When present, `ManagerIdentity` is the identity of the user that has the exclusive right to make changes to this Worker Deployment. Empty by default. When set, users whose identity does not match the `ManagerIdentity` will not be able to change the Worker Deployment. 
This is especially useful in environments where multiple users (such as CLI users and automated controllers) may interact with the same Worker Deployment. `ManagerIdentity` allows different users to communicate with one another about who is expected to make changes to the Worker Deployment. ``` temporal worker deployment manager-identity unset [options] ``` For example: ``` temporal worker deployment manager-identity unset \ --deployment-name YourDeploymentName ``` Clears the Manager Identity field for a given Deployment. Use the following options to change the behavior of this command. **Flags:** **--deployment-name** _string_ Name for a Worker Deployment. Required. **--yes**, **-y** _bool_ Don't prompt to confirm unset Manager Identity. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. 
This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### set-current-version ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` Set the Current Version for a Deployment. When a Version is current, Workers of that Deployment Version will receive tasks from new Workflows, and from existing AutoUpgrade Workflows that are running on this Deployment. If not all the expected Task Queues are being polled by Workers in the new Version, the request will fail. To override this protection, use `--ignore-missing-task-queues`. Note that this would ignore task queues in a deployment that are not yet discovered, leading to inconsistent task queue configuration. ``` temporal worker deployment set-current-version [options] ``` For example, to set the Current Version of a deployment `YourDeploymentName`, with a version with Build ID `YourBuildID`, and in the default namespace: ``` temporal worker deployment set-current-version \ --deployment-name YourDeploymentName --build-id YourBuildID ``` The target of set-current-version can also be unversioned workers: ``` temporal worker deployment set-current-version \ --deployment-name YourDeploymentName --unversioned ``` Use the following options to change the behavior of this command. **Flags:** **--allow-no-pollers** _bool_ Override protection and set version as current even if it has no pollers. **--build-id** _string_ Build ID of the Worker Deployment Version. Required unless --unversioned is specified. **--deployment-name** _string_ Name of the Worker Deployment. Required. **--ignore-missing-task-queues** _bool_ Override protection against accidentally removing task queues. **--unversioned** _bool_ Set unversioned workers as the target version. Cannot be used with --build-id. **--yes**, **-y** _bool_ Don't prompt to confirm set Current Version. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout.
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### set-ramping-version ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` Set the Ramping Version and Percentage for a Deployment. The Ramping Version can be set using deployment name and build ID, or set to unversioned workers using the --unversioned flag. The Ramping Percentage is a float with values in the range [0, 100]. A value of 100 does not make the Ramping Version Current; use `set-current-version` instead. To remove a Ramping Version, use the flag `--delete`. If not all the expected Task Queues are being polled by Workers in the new Ramping Version, the request will fail.
To override this protection, use `--ignore-missing-task-queues`. Note that this would ignore task queues in a deployment that are not yet discovered, leading to inconsistent task queue configuration. ``` temporal worker deployment set-ramping-version [options] ``` For example, to set the Ramping Version of a deployment `YourDeploymentName`, with a version with Build ID `YourBuildID`, with 10 percent of tasks redirected to this version, and using the default namespace: ``` temporal worker deployment set-ramping-version \ --deployment-name YourDeploymentName --build-id YourBuildID \ --percentage 10.0 ``` And to remove that ramping: ``` temporal worker deployment set-ramping-version \ --deployment-name YourDeploymentName --build-id YourBuildID \ --delete ``` Use the following options to change the behavior of this command. **Flags:** **--allow-no-pollers** _bool_ Override protection and set version as ramping even if it has no pollers. **--build-id** _string_ Build ID of the Worker Deployment Version. Required unless --unversioned is specified. **--delete** _bool_ Delete the Ramping Version. **--deployment-name** _string_ Name of the Worker Deployment. Required. **--ignore-missing-task-queues** _bool_ Override protection against accidentally removing task queues. **--percentage** _float_ Percentage of tasks redirected to the Ramping Version. Valid range [0,100]. **--unversioned** _bool_ Set unversioned workers as the target version. Cannot be used with --build-id. **--yes**, **-y** _bool_ Don't prompt to confirm set Ramping Version. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level.
Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### update-metadata-version ``` +---------------------------------------------------------------------+ | CAUTION: Worker Deployment is experimental. Deployment commands are | | subject to change. | +---------------------------------------------------------------------+ ``` Update metadata associated with a Worker Deployment Version. For example: ``` temporal worker deployment update-metadata-version \ --deployment-name YourDeploymentName --build-id YourBuildID \ --metadata bar=1 \ --metadata foo=true ``` The current metadata is also returned with `describe-version`: ``` temporal worker deployment describe-version \ --deployment-name YourDeploymentName --build-id YourBuildID \ ``` Use the following options to change the behavior of this command. **Flags:** **--build-id** _string_ Build ID of the Worker Deployment Version. Required. **--deployment-name** _string_ Name of the Worker Deployment. Required. **--metadata** _string[]_ Set deployment metadata using `KEY="VALUE"` pairs. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--remove-entries** _string[]_ Keys of entries to be deleted from metadata. Can be passed multiple times. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 
0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## describe Look up information about a specific worker. ``` temporal worker describe --namespace YourNamespace --worker-instance-key YourKey ``` Use the following options to change the behavior of this command. **Flags:** **--worker-instance-key** _string_ Worker instance key to describe. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint.
**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## list Get a list of workers in the specified namespace. ``` temporal worker list --namespace YourNamespace --query 'taskQueue="YourTaskQueue"' ``` Use the following options to change the behavior of this command. **Flags:** **--limit** _int_ Maximum number of workers to display. **--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint.
(default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. 
--- ## Temporal CLI workflow command reference {/* NOTE: This is an auto-generated file. Any edit to this file will be overwritten. This file is generated from https://github.com/temporalio/cli/blob/main/temporalcli/commandsgen/commands.yml */} ## cancel Canceling a running Workflow Execution records a `WorkflowExecutionCancelRequested` event in the Event History. The Service schedules a new Workflow Task, and the Workflow Execution performs any cleanup work supported by its implementation. Use the Workflow ID to cancel an Execution: ``` temporal workflow cancel \ --workflow-id YourWorkflowId ``` A visibility Query lets you send bulk cancellations to Workflow Executions matching the results: ``` temporal workflow cancel \ --query YourQuery ``` Visit https://docs.temporal.io/visibility to read more about Search Attributes and Query creation. See `temporal batch --help` for a quick reference. Use the following options to change the behavior of this command. **Flags:** **--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query. **--reason** _string_ Reason for batch operation. Only use with --query. Defaults to user name. **--rps** _float_ Limit batch's requests per second. Only allowed if query is present. **--run-id**, **-r** _string_ Run ID. Only use with --workflow-id. Cannot use with --query. **--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query. **--yes**, **-y** _bool_ Don't prompt to confirm cancellation. Only allowed when --query is present. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json.
(default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## count Show a count of Workflow Executions, regardless of execution state (running, terminated, etc). Use `--query` to select a subset of Workflow Executions: ``` temporal workflow count \ --query YourQuery ``` Visit https://docs.temporal.io/visibility to read more about Search Attributes and Query creation. See `temporal batch --help` for a quick reference. Use the following options to change the behavior of this command. **Flags:** **--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). 
(default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## delete Delete a Workflow Executions and its Event History: ``` temporal workflow delete \ --workflow-id YourWorkflowId ``` The removal executes asynchronously. If the Execution is Running, the Service terminates it before deletion. Visit https://docs.temporal.io/visibility to read more about Search Attributes and Query creation. See `temporal batch --help` for a quick reference. Use the following options to change the behavior of this command. **Flags:** **--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query. **--reason** _string_ Reason for batch operation. Only use with --query. Defaults to user name. **--rps** _float_ Limit batch's requests per second. Only allowed if query is present. **--run-id**, **-r** _string_ Run ID. Only use with --workflow-id. Cannot use with --query. **--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query. **--yes**, **-y** _bool_ Don't prompt to confirm signaling. Only allowed when --query is present. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. 
**--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. 
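The `--query` examples above use a placeholder. As a minimal sketch of a batch operation, assuming only the standard Search Attributes `WorkflowType` and `ExecutionStatus`, a bulk delete of failed Executions might look like:

```
# Placeholder values: substitute your own Workflow Type and List Filter.
temporal workflow delete \
    --query 'WorkflowType = "YourWorkflowType" AND ExecutionStatus = "Failed"' \
    --reason "Cleaning up failed runs" \
    --yes
```

The same List Filter shape works for the batch forms of `cancel` and `signal`. Passing `--yes` skips the interactive confirmation, and `--rps` can throttle the batch against a busy Namespace.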
## describe Display information about a specific Workflow Execution: ``` temporal workflow describe \ --workflow-id YourWorkflowId ``` Show the Workflow Execution's auto-reset points: ``` temporal workflow describe \ --workflow-id YourWorkflowId \ --reset-points true ``` Use the following options to change the behavior of this command. **Flags:** **--raw** _bool_ Print properties without changing their format. **--reset-points** _bool_ Show auto-reset points only. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. 
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

## execute

Establish a new Workflow Execution and direct its progress to stdout. The command blocks and returns when the Workflow Execution completes. If your Workflow requires input, pass valid JSON:

```
temporal workflow execute --workflow-id YourWorkflowId \
    --type YourWorkflow \
    --task-queue YourTaskQueue \
    --input '{"some-key": "some-value"}'
```

Use `--detailed` to display events as expanded sections rather than a table. When using JSON output (`--output json`), the output includes the entire "history" JSON key for the run.

Use the following options to change the behavior of this command.

**Flags:**

**--cron** _string_ Cron schedule for the Workflow.
**--detailed** _bool_ Display events as sections instead of table. Does not apply to JSON output.
**--execution-timeout** _duration_ Fail a WorkflowExecution if it lasts longer than `DURATION`. This time-out includes retries and ContinueAsNew tasks.
**--fail-existing** _bool_ Fail if the Workflow already exists.
**--fairness-key** _string_ Fairness key (max 64 bytes) for proportional task dispatch. Tasks with same key share capacity based on their weight.
**--fairness-weight** _float_ Weight [0.001-1000] for this fairness key. Keys are dispatched proportionally to their weights.
**--id-conflict-policy** _string-enum_ Determines how to resolve a conflict when spawning a new Workflow Execution with a particular Workflow Id used by an existing Open Workflow Execution. Accepted values: Fail, UseExisting, TerminateExisting.
**--id-reuse-policy** _string-enum_ Re-use policy for the Workflow ID in new Workflow Executions. Accepted values: AllowDuplicate, AllowDuplicateFailedOnly, RejectDuplicate, TerminateIfRunning.
**--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments.
**--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them.
**--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments.
**--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order.
**--memo** _string[]_ Memo using 'KEY="VALUE"' pairs. Use JSON values.
**--priority-key** _int_ Priority key (1-5, lower numbers = higher priority). Tasks in a queue should be processed in close-to-priority-order. Default is 3 when not specified.
**--run-timeout** _duration_ Fail a Workflow Run if it lasts longer than `DURATION`.
**--search-attribute** _string[]_ Search Attribute in `KEY=VALUE` format. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`.
Can be passed multiple times. **--start-delay** _duration_ Delay before starting the Workflow Execution. Can't be used with cron schedules. If the Workflow receives a signal or update prior to this time, the Workflow Execution starts immediately. **--static-details** _string_ Static Workflow details for human consumption in UIs. Uses Temporal Markdown formatting, may be multiple lines. :::note Option is experimental. ::: **--static-summary** _string_ Static Workflow summary for human consumption in UIs. Uses Temporal Markdown formatting, should be a single line. :::note Option is experimental. ::: **--task-queue**, **-t** _string_ Workflow Task queue. Required. **--task-timeout** _duration_ Fail a Workflow Task if it lasts longer than `DURATION`. This is the Start-to-close timeout for a Workflow Task. (default "10s") **--type** _string_ Workflow Type name. Required. **--workflow-id**, **-w** _string_ Workflow ID. If not supplied, the Service generates a unique ID. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. 
Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## execute-update-with-start Send a message to a Workflow Execution to invoke an Update handler, and wait for the update to complete. If the Workflow Execution is not running, then a new workflow execution is started and the update is sent. Experimental. ``` temporal workflow execute-update-with-start \ --update-name YourUpdate \ --update-input '{"update-key": "update-value"}' \ --workflow-id YourWorkflowId \ --type YourWorkflowType \ --task-queue YourTaskQueue \ --id-conflict-policy Fail \ --input '{"wf-key": "wf-value"}' ``` Use the following options to change the behavior of this command. **Flags:** **--cron** _string_ Cron schedule for the Workflow. **--execution-timeout** _duration_ Fail a WorkflowExecution if it lasts longer than `DURATION`. This time-out includes retries and ContinueAsNew tasks. **--fail-existing** _bool_ Fail if the Workflow already exists. **--fairness-key** _string_ Fairness key (max 64 bytes) for proportional task dispatch. Tasks with same key share capacity based on their weight. **--fairness-weight** _float_ Weight [0.001-1000] for this fairness key. Keys are dispatched proportionally to their weights. **--id-conflict-policy** _string-enum_ Determines how to resolve a conflict when spawning a new Workflow Execution with a particular Workflow Id used by an existing Open Workflow Execution. Accepted values: Fail, UseExisting, TerminateExisting. **--id-reuse-policy** _string-enum_ Re-use policy for the Workflow ID in new Workflow Executions. Accepted values: AllowDuplicate, AllowDuplicateFailedOnly, RejectDuplicate, TerminateIfRunning. **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--memo** _string[]_ Memo using 'KEY="VALUE"' pairs. Use JSON values. **--priority-key** _int_ Priority key (1-5, lower numbers = higher priority). Tasks in a queue should be processed in close-to-priority-order. Default is 3 when not specified. 
**--run-id**, **-r** _string_ Run ID. If unset, looks for an Update against the currently-running Workflow Execution. **--run-timeout** _duration_ Fail a Workflow Run if it lasts longer than `DURATION`. **--search-attribute** _string[]_ Search Attribute in `KEY=VALUE` format. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--start-delay** _duration_ Delay before starting the Workflow Execution. Can't be used with cron schedules. If the Workflow receives a signal or update prior to this time, the Workflow Execution starts immediately. **--static-details** _string_ Static Workflow details for human consumption in UIs. Uses Temporal Markdown formatting, may be multiple lines. :::note Option is experimental. ::: **--static-summary** _string_ Static Workflow summary for human consumption in UIs. Uses Temporal Markdown formatting, should be a single line. :::note Option is experimental. ::: **--task-queue**, **-t** _string_ Workflow Task queue. Required. **--task-timeout** _duration_ Fail a Workflow Task if it lasts longer than `DURATION`. This is the Start-to-close timeout for a Workflow Task. (default "10s") **--type** _string_ Workflow Type name. Required. **--update-first-execution-run-id** _string_ Parent Run ID. The update is sent to the last Workflow Execution in the chain started with this Run ID. **--update-id** _string_ Update ID. If unset, defaults to a UUID. **--update-input** _string[]_ Update input value. Use JSON content or set --update-input-meta to override. Can't be combined with --update-input-file. Can be passed multiple times to pass multiple arguments. **--update-input-base64** _bool_ Assume update inputs are base64-encoded and attempt to decode them. **--update-input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --update-input-meta to override. Can't be combined with --update-input. Can be passed multiple times to pass multiple arguments. **--update-input-meta** _string[]_ Input update payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. **--update-name** _string_ Update name. Required. **--workflow-id**, **-w** _string_ Workflow ID. If not supplied, the Service generates a unique ID. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. 
::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## fix-history-json Reserialize an Event History JSON file: ``` temporal workflow fix-history-json \ --source /path/to/original.json \ --target /path/to/reserialized.json ``` Use the following options to change the behavior of this command. **Flags:** **--source**, **-s** _string_ Path to the original file. Required. **--target**, **-t** _string_ Path to the results file. When omitted, output is sent to stdout. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. 
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. :::
**--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. :::
**--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. :::
**--env** _string_ Active environment name (`ENV`). (default "default")
**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.
**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.
**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".
**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")
**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")
**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")
**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.
**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")
**--profile** _string_ Profile to use for config file. :::note Option is experimental. :::
**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")
**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.
**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.
**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.
**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.
**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.
**--tls-server-name** _string_ Override target TLS server name.

## list

List Workflow Executions. The optional `--query` limits the output to Workflows matching a Query:

```
temporal workflow list \
    --query YourQuery
```

Visit https://docs.temporal.io/visibility to read more about Search Attributes and Query creation. See `temporal batch --help` for a quick reference.

View a list of archived Workflow Executions:

```
temporal workflow list \
    --archived
```

Use the following options to change the behavior of this command.

**Flags:**

**--archived** _bool_ Limit output to archived Workflow Executions. :::note Option is experimental. :::
**--limit** _int_ Maximum number of Workflow Executions to display.
**--page-size** _int_ Maximum number of Workflow Executions to fetch at a time from the server.
**--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. 
Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## metadata Issue a Query for and display user-set metadata like summary and details for a specific Workflow Execution: ``` temporal workflow metadata \ --workflow-id YourWorkflowId ``` Use the following options to change the behavior of this command. **Flags:** **--reject-condition** _string-enum_ Optional flag for rejecting Queries based on Workflow state. Accepted values: not_open, not_completed_cleanly. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. 
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## query Send a Query to a Workflow Execution by Workflow ID to retrieve its state. This synchronous operation exposes the internal state of a running Workflow Execution, which constantly changes. You can query both running and completed Workflow Executions: ``` temporal workflow query \ --workflow-id YourWorkflowId --type YourQueryType --input '{"YourInputKey": "YourInputValue"}' ``` Use the following options to change the behavior of this command. **Flags:** **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--name** _string_ Query Type/Name. Required. **--reject-condition** _string-enum_ Optional flag for rejecting Queries based on Workflow state. Accepted values: not_open, not_completed_cleanly. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). 
(default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## reset Reset a Workflow Execution so it can resume from a point in its Event History without losing its progress up to that point: ``` temporal workflow reset \ --workflow-id YourWorkflowId \ --event-id YourLastEvent ``` Start from where the Workflow Execution last continued as new: ``` temporal workflow reset \ --workflow-id YourWorkflowId \ --type LastContinuedAsNew ``` For batch resets, limit your resets to FirstWorkflowTask, LastWorkflowTask, or BuildId. Do not use Workflow IDs, run IDs, or event IDs with this command. Visit https://docs.temporal.io/visibility to read more about Search Attributes and Query creation. ### with-workflow-update-options Run Workflow Update Options atomically after the Workflow is reset. Workflows selected by the reset command are forwarded onto the subcommand. Use the following options to change the behavior of this command. **Flags:** **--versioning-override-behavior** _string-enum_ Override the versioning behavior of a Workflow. Required. Accepted values: pinned, auto_upgrade. **--versioning-override-build-id** _string_ When overriding to a `pinned` behavior, specifies the Build ID of the version to target. **--versioning-override-deployment-name** _string_ When overriding to a `pinned` behavior, specifies the Deployment Name of the version to target. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. 
(default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. 
## result Wait for and print the result of a Workflow Execution: ``` temporal workflow result \ --workflow-id YourWorkflowId ``` Use the following options to change the behavior of this command. **Flags:** **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. 
**--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## show Show a Workflow Execution's Event History. When using JSON output (`--output json`), you may pass the results to an SDK to perform a replay: ``` temporal workflow show \ --workflow-id YourWorkflowId --output json ``` Use the following options to change the behavior of this command. **Flags:** **--detailed** _bool_ Display events as detailed sections instead of table. Does not apply to JSON output. **--follow**, **-f** _bool_ Follow the Workflow Execution progress in real time. Does not apply to JSON output. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. 
(default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## signal Send an asynchronous notification (Signal) to a running Workflow Execution by its Workflow ID. The Signal is written to the History. When you include `--input`, that data is available for the Workflow Execution to consume: ``` temporal workflow signal \ --workflow-id YourWorkflowId \ --name YourSignal \ --input '{"YourInputKey": "YourInputValue"}' ``` Visit https://docs.temporal.io/visibility to read more about Search Attributes and Query creation. See `temporal batch --help` for a quick reference. Use the following options to change the behavior of this command. **Flags:** **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--name** _string_ Signal name. Required. **--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query. **--reason** _string_ Reason for batch operation. Only use with --query. Defaults to user name. **--rps** _float_ Limit batch's requests per second. Only allowed if query is present. **--run-id**, **-r** _string_ Run ID. Only use with --workflow-id. Cannot use with --query. **--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query. **--yes**, **-y** _bool_ Don't prompt to confirm signaling. Only allowed when --query is present. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. 
Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name.

## signal-with-start

Send an asynchronous notification (Signal) to a Workflow Execution. If the Workflow Execution is not running or is not found, this command first starts the Workflow Execution and then sends it the Signal.

```
temporal workflow signal-with-start \
  --signal-name YourSignal \
  --signal-input '{"some-key": "some-value"}' \
  --workflow-id YourWorkflowId \
  --type YourWorkflowType \
  --task-queue YourTaskQueue \
  --input '{"some-key": "some-value"}'
```

Use the following options to change the behavior of this command. **Flags:** **--cron** _string_ Cron schedule for the Workflow.
**--execution-timeout** _duration_ Fail a Workflow Execution if it lasts longer than `DURATION`. This timeout includes retries and ContinueAsNew tasks. **--fail-existing** _bool_ Fail if the Workflow already exists. **--fairness-key** _string_ Fairness key (max 64 bytes) for proportional task dispatch. Tasks with the same key share capacity based on their weight. **--fairness-weight** _float_ Weight [0.001-1000] for this fairness key. Keys are dispatched proportionally to their weights. **--id-conflict-policy** _string-enum_ Determines how to resolve a conflict when spawning a new Workflow Execution with a particular Workflow Id used by an existing Open Workflow Execution. Accepted values: Fail, UseExisting, TerminateExisting. **--id-reuse-policy** _string-enum_ Re-use policy for the Workflow ID in new Workflow Executions. Accepted values: AllowDuplicate, AllowDuplicateFailedOnly, RejectDuplicate, TerminateIfRunning. **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--memo** _string[]_ Memo using 'KEY="VALUE"' pairs. Use JSON values. **--priority-key** _int_ Priority key (1-5, lower numbers = higher priority). Tasks in a queue should be processed in close-to-priority-order. Default is 3 when not specified. **--run-timeout** _duration_ Fail a Workflow Run if it lasts longer than `DURATION`. **--search-attribute** _string[]_ Search Attribute in `KEY=VALUE` format. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--signal-input** _string[]_ Signal input value. Use JSON content or set --signal-input-meta to override. Can't be combined with --signal-input-file. Can be passed multiple times to pass multiple arguments. **--signal-input-base64** _bool_ Assume signal inputs are base64-encoded and attempt to decode them. **--signal-input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --signal-input-meta to override. Can't be combined with --signal-input. Can be passed multiple times to pass multiple arguments. **--signal-input-meta** _string[]_ Input signal payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. **--signal-name** _string_ Signal name. Required. **--start-delay** _duration_ Delay before starting the Workflow Execution. Can't be used with cron schedules. If the Workflow receives a signal or update prior to this time, the Workflow Execution starts immediately. **--static-details** _string_ Static Workflow details for human consumption in UIs. Uses Temporal Markdown formatting, may be multiple lines. :::note Option is experimental. ::: **--static-summary** _string_ Static Workflow summary for human consumption in UIs. Uses Temporal Markdown formatting, should be a single line. :::note Option is experimental.
::: **--task-queue**, **-t** _string_ Workflow Task queue. Required. **--task-timeout** _duration_ Fail a Workflow Task if it lasts longer than `DURATION`. This is the Start-to-close timeout for a Workflow Task. (default "10s") **--type** _string_ Workflow Type name. Required. **--workflow-id**, **-w** _string_ Workflow ID. If not supplied, the Service generates a unique ID. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. 
**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## stack Perform a Query on a Workflow Execution using a `__stack_trace`-type Query. Display a stack trace of the threads and routines currently in use by the Workflow for troubleshooting: ``` temporal workflow stack \ --workflow-id YourWorkflowId ``` Use the following options to change the behavior of this command. **Flags:** **--reject-condition** _string-enum_ Optional flag to reject Queries based on Workflow state. Accepted values: not_open, not_completed_cleanly. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. 
::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name.

## start

Start a new Workflow Execution. Returns the Workflow- and Run-IDs:

```
temporal workflow start \
  --workflow-id YourWorkflowId \
  --type YourWorkflow \
  --task-queue YourTaskQueue \
  --input '{"some-key": "some-value"}'
```

Use the following options to change the behavior of this command. **Flags:** **--cron** _string_ Cron schedule for the Workflow. **--execution-timeout** _duration_ Fail a Workflow Execution if it lasts longer than `DURATION`. This timeout includes retries and ContinueAsNew tasks. **--fail-existing** _bool_ Fail if the Workflow already exists. **--fairness-key** _string_ Fairness key (max 64 bytes) for proportional task dispatch. Tasks with the same key share capacity based on their weight. **--fairness-weight** _float_ Weight [0.001-1000] for this fairness key. Keys are dispatched proportionally to their weights. **--id-conflict-policy** _string-enum_ Determines how to resolve a conflict when spawning a new Workflow Execution with a particular Workflow Id used by an existing Open Workflow Execution. Accepted values: Fail, UseExisting, TerminateExisting. **--id-reuse-policy** _string-enum_ Re-use policy for the Workflow ID in new Workflow Executions. Accepted values: AllowDuplicate, AllowDuplicateFailedOnly, RejectDuplicate, TerminateIfRunning. **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--memo** _string[]_ Memo using 'KEY="VALUE"' pairs. Use JSON values. **--priority-key** _int_ Priority key (1-5, lower numbers = higher priority). Tasks in a queue should be processed in close-to-priority-order. Default is 3 when not specified. **--run-timeout** _duration_ Fail a Workflow Run if it lasts longer than `DURATION`. **--search-attribute** _string[]_ Search Attribute in `KEY=VALUE` format. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`.
Can be passed multiple times. **--start-delay** _duration_ Delay before starting the Workflow Execution. Can't be used with cron schedules. If the Workflow receives a signal or update prior to this time, the Workflow Execution starts immediately. **--static-details** _string_ Static Workflow details for human consumption in UIs. Uses Temporal Markdown formatting, may be multiple lines. :::note Option is experimental. ::: **--static-summary** _string_ Static Workflow summary for human consumption in UIs. Uses Temporal Markdown formatting, should be a single line. :::note Option is experimental. ::: **--task-queue**, **-t** _string_ Workflow Task queue. Required. **--task-timeout** _duration_ Fail a Workflow Task if it lasts longer than `DURATION`. This is the Start-to-close timeout for a Workflow Task. (default "10s") **--type** _string_ Workflow Type name. Required. **--workflow-id**, **-w** _string_ Workflow ID. If not supplied, the Service generates a unique ID. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. 
Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name.

## start-update-with-start

Send a message to a Workflow Execution to invoke an Update handler, and wait for the update to be accepted or rejected. If the Workflow Execution is not running, a new Workflow Execution is started and the update is sent. Experimental.

```
temporal workflow start-update-with-start \
  --update-name YourUpdate \
  --update-input '{"update-key": "update-value"}' \
  --update-wait-for-stage accepted \
  --workflow-id YourWorkflowId \
  --type YourWorkflowType \
  --task-queue YourTaskQueue \
  --id-conflict-policy Fail \
  --input '{"wf-key": "wf-value"}'
```

Use the following options to change the behavior of this command. **Flags:** **--cron** _string_ Cron schedule for the Workflow. **--execution-timeout** _duration_ Fail a Workflow Execution if it lasts longer than `DURATION`. This timeout includes retries and ContinueAsNew tasks. **--fail-existing** _bool_ Fail if the Workflow already exists. **--fairness-key** _string_ Fairness key (max 64 bytes) for proportional task dispatch. Tasks with the same key share capacity based on their weight. **--fairness-weight** _float_ Weight [0.001-1000] for this fairness key. Keys are dispatched proportionally to their weights. **--id-conflict-policy** _string-enum_ Determines how to resolve a conflict when spawning a new Workflow Execution with a particular Workflow Id used by an existing Open Workflow Execution. Accepted values: Fail, UseExisting, TerminateExisting. **--id-reuse-policy** _string-enum_ Re-use policy for the Workflow ID in new Workflow Executions. Accepted values: AllowDuplicate, AllowDuplicateFailedOnly, RejectDuplicate, TerminateIfRunning. **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--memo** _string[]_ Memo using 'KEY="VALUE"' pairs. Use JSON values. **--priority-key** _int_ Priority key (1-5, lower numbers = higher priority).
Tasks in a queue should be processed in close-to-priority-order. Default is 3 when not specified. **--run-id**, **-r** _string_ Run ID. If unset, looks for an Update against the currently-running Workflow Execution. **--run-timeout** _duration_ Fail a Workflow Run if it lasts longer than `DURATION`. **--search-attribute** _string[]_ Search Attribute in `KEY=VALUE` format. Keys must be identifiers, and values must be JSON values. For example: `'YourKey={"your": "value"}'`. Can be passed multiple times. **--start-delay** _duration_ Delay before starting the Workflow Execution. Can't be used with cron schedules. If the Workflow receives a signal or update prior to this time, the Workflow Execution starts immediately. **--static-details** _string_ Static Workflow details for human consumption in UIs. Uses Temporal Markdown formatting, may be multiple lines. :::note Option is experimental. ::: **--static-summary** _string_ Static Workflow summary for human consumption in UIs. Uses Temporal Markdown formatting, should be a single line. :::note Option is experimental. ::: **--task-queue**, **-t** _string_ Workflow Task queue. Required. **--task-timeout** _duration_ Fail a Workflow Task if it lasts longer than `DURATION`. This is the Start-to-close timeout for a Workflow Task. (default "10s") **--type** _string_ Workflow Type name. Required. **--update-first-execution-run-id** _string_ Parent Run ID. The update is sent to the last Workflow Execution in the chain started with this Run ID. **--update-id** _string_ Update ID. If unset, defaults to a UUID. **--update-input** _string[]_ Update input value. Use JSON content or set --update-input-meta to override. Can't be combined with --update-input-file. Can be passed multiple times to pass multiple arguments. **--update-input-base64** _bool_ Assume update inputs are base64-encoded and attempt to decode them. **--update-input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --update-input-meta to override. Can't be combined with --update-input. Can be passed multiple times to pass multiple arguments. **--update-input-meta** _string[]_ Input update payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. **--update-name** _string_ Update name. Required. **--update-wait-for-stage** _string-enum_ Update stage to wait for. The only option is `accepted`, but this option is required. This is to allow a future version of the CLI to choose a default value. Required. Accepted values: accepted. **--workflow-id**, **-w** _string_ Workflow ID. If not supplied, the Service generates a unique ID. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. 
**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## terminate Terminate a Workflow Execution: ``` temporal workflow terminate \ --reason YourReasonForTermination \ --workflow-id YourWorkflowId ``` The reason is optional and defaults to the current user's name. The reason is stored in the Event History as part of the `WorkflowExecutionTerminated` event. This becomes the closing Event in the Workflow Execution's history. Executions may be terminated in bulk via a visibility Query list filter: ``` temporal workflow terminate \ --query YourQuery \ --reason YourReasonForTermination ``` Workflow code cannot see or respond to terminations. To perform clean-up work in your Workflow code, use `temporal workflow cancel` instead. Visit https://docs.temporal.io/visibility to read more about Search Attributes and Query creation. 
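For example, a throttled, non-interactive bulk termination might look like the following sketch (the List Filter value and reason are illustrative, and combine the documented --query, --reason, --rps, and --yes flags):

```
# The List Filter below is illustrative; adjust it to your Namespace's data.
temporal workflow terminate \
  --query 'ExecutionStatus="Running" AND WorkflowType="YourWorkflowType"' \
  --reason "Cleaning up stuck Executions" \
  --rps 10 \
  --yes
```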
See `temporal batch --help` for a quick reference. Use the following options to change the behavior of this command. **Flags:** **--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query. **--reason** _string_ Reason for termination. Defaults to a message containing the current user's name. **--rps** _float_ Limit batch's requests per second. Only allowed if query is present. **--run-id**, **-r** _string_ Run ID. Can only be set with --workflow-id. Do not use with --query. **--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query. **--yes**, **-y** _bool_ Don't prompt to confirm termination. Can only be used with --query. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present.
Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## trace Display the progress of a Workflow Execution and its child workflows with a real-time trace. This view helps you understand how Workflows are proceeding: ``` temporal workflow trace \ --workflow-id YourWorkflowId ``` Use the following options to change the behavior of this command. **Flags:** **--concurrency** _int_ Number of Workflow Histories to fetch at a time. (default "10") **--depth** _int_ Set depth for your Child Workflow fetches. Pass -1 to fetch child workflows at any depth. (default "-1") **--fold** _string[]_ Fold away Child Workflows with the specified statuses. Case-insensitive. Ignored if --no-fold supplied. Available values: running, completed, failed, canceled, terminated, timedout, continueasnew. Can be passed multiple times. **--no-fold** _bool_ Disable folding. Fetch and display Child Workflows within the set depth. **--run-id**, **-r** _string_ Run ID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. 
Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ## update An Update is a synchronous call to a Workflow Execution that can change its state, control its flow, and return a result. ### describe Given a Workflow Execution and an Update ID, return information about its current status, including a result if it has finished. ``` temporal workflow update describe \ --workflow-id YourWorkflowId \ --update-id YourUpdateId ``` Use the following options to change the behavior of this command. **Flags:** **--run-id**, **-r** _string_ Run ID. If unset, updates the currently-running Workflow Execution. **--update-id** _string_ Update ID. Must be unique per Workflow Execution. Required. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. 
::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name. ### execute Send a message to a Workflow Execution to invoke an Update handler, and wait for the update to complete or fail. You can also use this to wait for an existing update to complete, by submitting an existing update ID. ``` temporal workflow update execute \ --workflow-id YourWorkflowId \ --name YourUpdate \ --input '{"some-key": "some-value"}' ``` Use the following options to change the behavior of this command. **Flags:** **--first-execution-run-id** _string_ Parent Run ID. The update is sent to the last Workflow Execution in the chain started with this Run ID. **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. 
**--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--name** _string_ Handler method name. Required. **--run-id**, **-r** _string_ Run ID. If unset, looks for an Update against the currently-running Workflow Execution. **--update-id** _string_ Update ID. If unset, defaults to a UUID. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. 
**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name.

### result

Given a Workflow Execution and an Update ID, wait for the Update to complete or fail and print the result.

```
temporal workflow update result \
  --workflow-id YourWorkflowId \
  --update-id YourUpdateId
```

Use the following options to change the behavior of this command. **Flags:** **--run-id**, **-r** _string_ Run ID. If unset, targets the currently-running Workflow Execution. **--update-id** _string_ Update ID. Must be unique per Workflow Execution. Required. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format.
Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name.

### start

Send a message to a Workflow Execution to invoke an Update handler, and wait for the update to be accepted or rejected. You can subsequently wait for the update to complete by using `temporal workflow update execute`.

```
temporal workflow update start \
  --workflow-id YourWorkflowId \
  --name YourUpdate \
  --input '{"some-key": "some-value"}' \
  --wait-for-stage accepted
```

Use the following options to change the behavior of this command. **Flags:** **--first-execution-run-id** _string_ Parent Run ID. The update is sent to the last Workflow Execution in the chain started with this Run ID. **--input**, **-i** _string[]_ Input value. Use JSON content or set --input-meta to override. Can't be combined with --input-file. Can be passed multiple times to pass multiple arguments. **--input-base64** _bool_ Assume inputs are base64-encoded and attempt to decode them. **--input-file** _string[]_ A path or paths for input file(s). Use JSON content or set --input-meta to override. Can't be combined with --input. Can be passed multiple times to pass multiple arguments. **--input-meta** _string[]_ Input payload metadata as a `KEY=VALUE` pair. When the KEY is "encoding", this overrides the default ("json/plain"). Can be passed multiple times. Repeated metadata keys are applied to the corresponding inputs in the provided order. **--name** _string_ Handler method name. Required. **--run-id**, **-r** _string_ Run ID. If unset, looks for an Update against the currently-running Workflow Execution. **--update-id** _string_ Update ID. If unset, defaults to a UUID. **--wait-for-stage** _string-enum_ Update stage to wait for. The only option is `accepted`, but this option is required. This is to allow a future version of the CLI to choose a default value. Required. Accepted values: accepted. **--workflow-id**, **-w** _string_ Workflow ID. Required. **Global Flags:** **--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233") **--api-key** _string_ API key for request. **--client-authority** _string_ Temporal gRPC client :authority pseudoheader. **--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout. **--codec-auth** _string_ Authorization header for Codec Server requests. **--codec-endpoint** _string_ Remote Codec Server endpoint. **--codec-header** _string[]_ HTTP headers for requests to codec server.
Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. **--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto") **--command-timeout** _duration_ The command execution timeout. 0s means no timeout. **--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows. :::note Option is experimental. ::: **--disable-config-env** _bool_ If set, disables loading environment config from environment variables. :::note Option is experimental. ::: **--disable-config-file** _bool_ If set, disables loading environment config from config file. :::note Option is experimental. ::: **--env** _string_ Active environment name (`ENV`). (default "default") **--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`. **--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`. **--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST". **--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text") **--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info") **--namespace**, **-n** _string_ Temporal Service Namespace. (default "default") **--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used. **--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text") **--profile** _string_ Profile to use for config file. :::note Option is experimental. ::: **--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative") **--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable. **--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path. **--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data. **--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path. **--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data. **--tls-disable-host-verification** _bool_ Disable TLS host-name verification. **--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path. **--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data. **--tls-server-name** _string_ Override target TLS server name.

## update-options

```
+----------------------------------------------------------------------+
| CAUTION: Workflow update-options is experimental. Workflow Execution  |
| properties are subject to change.                                     |
+----------------------------------------------------------------------+
```

Modify properties of Workflow Executions:

```
temporal workflow update-options [options]
```

It can override the Worker Deployment configuration of a Workflow Execution, which controls Worker Versioning.
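In addition to targeting a single Execution with `--workflow-id`, this command accepts a visibility `--query` List Filter to apply an override across many Executions at once. A minimal sketch, combining the documented --query, --versioning-override-behavior, and --yes flags (the filter value is illustrative):

```
# The List Filter below is illustrative; adjust it to your Namespace's data.
temporal workflow update-options \
  --query 'WorkflowType="YourWorkflowType"' \
  --versioning-override-behavior auto_upgrade \
  --yes
```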
For example, to force Workers in the current Deployment to execute the next Workflow Task, change the behavior to `auto_upgrade`:

```
temporal workflow update-options \
    --workflow-id YourWorkflowId \
    --versioning-override-behavior auto_upgrade
```

Or, to pin the Workflow Execution to a Worker Deployment, set the behavior to `pinned`:

```
temporal workflow update-options \
    --workflow-id YourWorkflowId \
    --versioning-override-behavior pinned \
    --versioning-override-deployment-name YourDeploymentName \
    --versioning-override-build-id YourDeploymentBuildId
```

To remove any previous overrides, set the behavior to `unspecified`:

```
temporal workflow update-options \
    --workflow-id YourWorkflowId \
    --versioning-override-behavior unspecified
```

To see the current override, use `temporal workflow describe`.

Use the following options to change the behavior of this command.

**Flags:**

**--query**, **-q** _string_ Content for an SQL-like `QUERY` List Filter. You must set either --workflow-id or --query.

**--reason** _string_ Reason for batch operation. Only use with --query. Defaults to user name.

**--rps** _float_ Limit batch's requests per second. Only allowed if query is present.

**--run-id**, **-r** _string_ Run ID. Only use with --workflow-id. Cannot use with --query.

**--versioning-override-behavior** _string-enum_ Override the versioning behavior of a Workflow. Required. Accepted values: unspecified, pinned, auto_upgrade.

**--versioning-override-build-id** _string_ When overriding to a `pinned` behavior, specifies the Build ID of the version to target.

**--versioning-override-deployment-name** _string_ When overriding to a `pinned` behavior, specifies the Deployment Name of the version to target.

**--workflow-id**, **-w** _string_ Workflow ID. You must set either --workflow-id or --query.

**--yes**, **-y** _bool_ Don't prompt to confirm the operation. Only allowed when --query is present.

**Global Flags:**

**--address** _string_ Temporal Service gRPC endpoint. (default "localhost:7233")

**--api-key** _string_ API key for request.

**--client-authority** _string_ Temporal gRPC client :authority pseudoheader.

**--client-connect-timeout** _duration_ The client connection timeout. 0s means no timeout.

**--codec-auth** _string_ Authorization header for Codec Server requests.

**--codec-endpoint** _string_ Remote Codec Server endpoint.

**--codec-header** _string[]_ HTTP headers for requests to codec server. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers.

**--color** _string-enum_ Output coloring. Accepted values: always, never, auto. (default "auto")

**--command-timeout** _duration_ The command execution timeout. 0s means no timeout.

**--config-file** _string_ File path to read TOML config from, defaults to `$CONFIG_PATH/temporal/temporal.toml` where `$CONFIG_PATH` is defined as `$HOME/.config` on Unix, "$HOME/Library/Application Support" on macOS, and %AppData% on Windows.

:::note
Option is experimental.
:::

**--disable-config-env** _bool_ If set, disables loading environment config from environment variables.

:::note
Option is experimental.
:::

**--disable-config-file** _bool_ If set, disables loading environment config from config file.

:::note
Option is experimental.
:::

**--env** _string_ Active environment name (`ENV`). (default "default")

**--env-file** _string_ Path to environment settings file. Defaults to `$HOME/.config/temporalio/temporal.yaml`.

**--grpc-meta** _string[]_ HTTP headers for requests. Format as a `KEY=VALUE` pair. May be passed multiple times to set multiple headers. Can also be made available via environment variable as `TEMPORAL_GRPC_META_[name]`.

**--identity** _string_ The identity of the user or client submitting this request. Defaults to "temporal-cli:$USER@$HOST".

**--log-format** _string-enum_ Log format. Accepted values: text, json. (default "text")

**--log-level** _string-enum_ Log level. Default is "info" for most commands and "warn" for `server start-dev`. Accepted values: debug, info, warn, error, never. (default "info")

**--namespace**, **-n** _string_ Temporal Service Namespace. (default "default")

**--no-json-shorthand-payloads** _bool_ Raw payload output, even if the JSON option was used.

**--output**, **-o** _string-enum_ Non-logging data output format. Accepted values: text, json, jsonl, none. (default "text")

**--profile** _string_ Profile to use for config file.

:::note
Option is experimental.
:::

**--time-format** _string-enum_ Time format. Accepted values: relative, iso, raw. (default "relative")

**--tls** _bool_ Enable base TLS encryption. Does not have additional options like mTLS or client certs. This is defaulted to true if api-key or any other TLS options are present. Use --tls=false to explicitly disable.

**--tls-ca-data** _string_ Data for server CA certificate. Can't be used with --tls-ca-path.

**--tls-ca-path** _string_ Path to server CA certificate. Can't be used with --tls-ca-data.

**--tls-cert-data** _string_ Data for x509 certificate. Can't be used with --tls-cert-path.

**--tls-cert-path** _string_ Path to x509 certificate. Can't be used with --tls-cert-data.

**--tls-disable-host-verification** _bool_ Disable TLS host-name verification.

**--tls-key-data** _string_ Private certificate key data. Can't be used with --tls-key-path.

**--tls-key-path** _string_ Path to x509 private key. Can't be used with --tls-key-data.

**--tls-server-name** _string_ Override target TLS server name.
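Because `--query` is supported, the same override can be applied as a batch across many Workflow Executions. A sketch assembled from the flags documented above; the List Filter and reason text are illustrative:

```command
temporal workflow update-options \
    --query 'WorkflowType="YourWorkflowType" AND ExecutionStatus="Running"' \
    --versioning-override-behavior auto_upgrade \
    --reason "Move running Workflows to auto_upgrade" \
    --yes
```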
---

## Audit Logs - AWS Kinesis

## Configure Audit Logs using AWS Kinesis {#configure-audit-log}

To set up Audit Logs, you must have an Amazon Web Services (AWS) account and set up Kinesis Data Streams.

1. If you don't have an AWS account, follow the instructions from AWS in [Create and activate an AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/).
2. To set up Kinesis Data Streams, open the [AWS Management Console](https://aws.amazon.com/console/), search for Kinesis, and start the setup process.

You can use [this AWS CloudFormation template](https://temporal-auditlogs-config.s3.us-west-2.amazonaws.com/cloudformation/iam-role-for-temporal-audit-logs.yaml) to create an IAM role with access to a Kinesis stream you have in your account.

Be aware that Kinesis has a rate limit of 1,000 messages per second and quotas for both the number of records written and the size of the records. For more information, see [Why is my Kinesis data stream throttling?](https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-data-stream-throttling/)

### Create an Audit Log sink

1. In the Temporal Cloud UI, select **Settings**.
1. On the **Settings** page, select **Audit Logs**.
1. In the **Audit Log Integration** card, select **Setup**.
1. On the **Audit Log Integration** page, choose your **Access method** (either **Auto** or **Manual**).
   - **Auto:** Configure the AWS CloudFormation stack in your AWS account from the Cloud UI.
   - **Manual:** Use a generated AWS CloudFormation template to set up Kinesis manually.
1. In **Kinesis ARN**, paste the Kinesis ARN from your AWS account.
1. In **Role name**, provide a name for a new IAM Role.
1. In **Select an AWS region**, select the appropriate region for your Kinesis stream.

If you chose the **Auto** access method, continue with the following steps:

1. Select **Save and launch stack**.
1. In **Stack name** in the AWS CloudFormation console, specify a name for the stack.
1. In the lower-right corner of the page, select **Create stack**.

If you chose the **Manual** access method, continue with the following steps:

1. Select **Save and download template**.
1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).
1. Select **Create Stack**.
1. On the **Create stack** page, select **Template is ready** and **Upload a template file**.
1. Select **Choose file** and specify the template you generated in step 1.
1. Select **Next** on this page and on the next two pages.
1. On the **Review** page, select **Create stack**.

To ensure that Audit Logs can flow into the Kinesis stream, you can use the **Verify** button to confirm it is set up correctly. This validates that Temporal can successfully write to your stream. If everything is configured correctly, you will see a `Success` status indicating Temporal has written to the Kinesis stream.

## Consume an Audit Log {#consume-an-audit-log}

**How to consume an Audit Log**

After you create an Audit Log sink, wait for the logs to flow into the Kinesis stream. You will see the first logs within 10 minutes after you configure the sink.

:::note
You must configure and implement your own consumer of the Kinesis stream. For an example, see [Example of consuming an Audit Log](#example-of-consuming-an-audit-log).
:::

### Example of consuming an Audit Log

The following Go code is an example of consuming Audit Logs that a Kinesis consumer has delivered to an S3 bucket.

```go
package main

import (
	"context"
	"fmt"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	fmt.Println("print audit log from S3")

	// Load AWS credentials and settings from the named shared-config profile.
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithSharedConfigProfile("your_profile"),
	)
	if err != nil {
		fmt.Println(err)
	}

	// Fetch the exported Audit Log object from S3.
	s3Client := s3.NewFromConfig(cfg)
	response, err := s3Client.GetObject(
		context.Background(),
		&s3.GetObjectInput{
			Bucket: aws.String("your_bucket_name"),
			Key:    aws.String("your_s3_file_path"),
		})
	if err != nil {
		fmt.Println(err)
	}
	defer response.Body.Close()

	// Print the Audit Log contents to the terminal.
	content, err := io.ReadAll(response.Body)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(string(content))
}
```

The preceding code also prints the logs in the terminal. The following is a sample result.

```json
{
  "emit_time": "2023-11-14T07:56:55Z",
  "level": "LOG_LEVEL_INFO",
  "caller_ip_address": "10.1.2.3, 10.4.5.6",
  "user_email": "user1@example.com",
  "operation": "DeleteUser",
  "details": {
    "target_users": ["d7dca96f-adcc-417d-aafc-e8f5d2ba9fe1"],
    "search_attribute_update": {}
  },
  "status": "OK",
  "category": "LOG_CATEGORY_ADMIN",
  "log_id": "0mc69c0323b871293ce231dd1c7fb639",
  "request_id": "445297d3-43a7-4793-8a04-1b1dd1999640",
  "principal": {
    "id": "988cb80b-d6be-4bb5-9c87-d09f93f58ed3",
    "type": "user",
    "name": "user1@example.com"
  }
}
```

---

## Audit Logs - GCP Pub/Sub

## Manual Setup Prerequisites

:::note
These steps are only required for manual setup. If you use Terraform for your deployment, you don't need to complete these prerequisites.
:::

Before configuring the manual Audit Log sink, complete the following steps in Google Cloud:

1. Create a Pub/Sub topic and make a note of its topic name, such as `test-auditlog`.
1. Set up a service account in the same project in Google Cloud and follow the instructions in the Temporal Cloud UI to configure the permissions for that service account.

## Create an Audit Log sink

1. In the Temporal Cloud UI, select **Settings**.
1. On the **Settings** page, select **Audit Logs**.
1. In the **Audit Logs Integration** card, select **Setup**.
1. On the **Audit Log Integration** page, select **Pub/Sub**.
1. In the **service account email** field, enter the email of the service account you created in the prerequisites.
1. In the **Topic name** field, enter the topic name of the Pub/Sub topic you created in the prerequisites.
1. There are two ways to configure the service account to write to the Pub/Sub sink: select **Manual** to configure the account manually, or **Deploy with Terraform** to use Terraform. If you use Terraform, you don't need to complete the prerequisite steps above.
1. Follow the instructions in the Temporal Cloud UI for the method you chose.
1. To ensure that Audit Logs can reach your Pub/Sub topic, you can use the **Verify** button to confirm it is set up correctly. This validates that Temporal can successfully write to your topic. If everything is configured correctly, you will see a `Success` status indicating Temporal has written to the Pub/Sub topic.
1. Click **Create** to configure the Audit Log sink. Audit Logs will begin to show up in Pub/Sub within 10 minutes.

![Temporal Cloud UI Setup for Audit Logs with GCP Pub/Sub](/img/cloud/gcp/audit-logging-pub-sub-gcp.png)
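As with Kinesis, you must implement your own consumer of the logs. The following is a minimal Go sketch using the Google Cloud Pub/Sub client library; the project ID and subscription ID are placeholders, and it assumes you have created a subscription on the Audit Log topic.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()

	// your_project_id and your_subscription_id are placeholders for the GCP
	// project that owns the topic and a subscription you created on it.
	psClient, err := pubsub.NewClient(ctx, "your_project_id")
	if err != nil {
		log.Fatal(err)
	}
	defer psClient.Close()

	sub := psClient.Subscription("your_subscription_id")

	// Receive blocks, invoking the callback for each Audit Log message.
	err = sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		fmt.Println(string(msg.Data)) // one JSON Audit Log entry
		msg.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}
```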
:::info MORE INFORMATION
For more details, refer to [Audit Logs with Temporal Cloud](https://docs.temporal.io/cloud/audit-logs).
:::

---

## Audit Logs

Audit Logs is a feature of [Temporal Cloud](/cloud/overview) that provides forensic access information for a variety of operations in the Temporal Cloud control plane. Audit Logs answers "who, when, and what" questions about Temporal Cloud resources. These answers can help you evaluate the security of your organization, and they can provide information that you need to satisfy audit and compliance requirements.

You need the Account Owner or Global Administrator role to view Audit Logs via UI, use the API, or to configure an Audit Log Integration with [AWS Kinesis](/cloud/audit-logs-aws) or [GCP Pub/Sub](/cloud/audit-logs-gcp).

:::info
Audit Logs do NOT capture data plane events, like Workflow Start, Workflow Terminate, Schedule Create, etc. Instead, explore the [Export](/cloud/export) feature, which does let you send closed Workflow Histories to external storage.
:::

## Which events are supported by Audit Logs? {#supported-events}

- Account
  - `ChangeAccountPlanType`: Change Account Plan Type
  - `UpdateAccountAPI`: Configure Audit Logs, Configure Observability Endpoint
- API Keys
  - `CreateAPIKey`: Create API Key
  - `DeleteAPIKey`: Delete API Key
  - `UpdateAPIKey`: Update API Key
- Connectivity Rules
  - `CreateConnectivityRule`: Create Connectivity Rule
  - `DeleteConnectivityRule`: Delete Connectivity Rule
- Namespace
  - `CreateNamespaceAPI`: Create Namespace
  - `DeleteNamespaceAPI`: Delete Namespace
  - `FailoverNamespacesAPI`: Failover (for High Availability Namespaces)
  - `RenameCustomSearchAttributeAPI`: Rename Custom Search Attribute
  - `UpdateNamespaceAPI`: Includes retention period changes, replica edits, authentication method updates, custom search attribute updates, and connectivity rule bindings
- Namespace Export
  - `CreateNamespaceExportSink`: Create Namespace Export Sink
  - `DeleteNamespaceExportSink`: Delete Namespace Export Sink
  - `UpdateNamespaceExportSink`: Update Namespace Export Sink
  - `ValidateNamespaceExportSink`: Validate Namespace Export Sink
- Nexus Endpoint
  - `CreateNexusEndpoint`: Create Nexus Endpoint
  - `DeleteNexusEndpoint`: Delete Nexus Endpoint
  - `UpdateNexusEndpoint`: Update Nexus Endpoint
- Service Accounts
  - `CreateServiceAccount`: Create Service Account
  - `CreateServiceAccountAPIKey`: Create Service Account API Key
  - `DeleteServiceAccount`: Delete Service Account
  - `UpdateServiceAccount`: Update Service Account
- User
  - `CreateUserAPI`: Create Users
  - `DeleteUserAPI`: Delete Users
  - `InviteUsersAPI`: Invite Users
  - `SetUserNamespaceAccessAPI`: Set User Namespace Access
  - `UpdateIdentityNamespacePermissionsAPI`: Update Identity Namespace Permissions
  - `UpdateUserAPI`: Update User Account-level Roles
  - `UpdateUserNamespacePermissionsAPI`: Update User Namespace Permissions
- User Groups
  - `CreateUserGroup`: Create User Group
  - `DeleteUserGroup`: Delete User Group
  - `SetUserGroupNamespaceAccess`: Set User Group Namespace Access
  - `UpdateUserGroup`: Update User Group

### Audit Log format

:::info DEPRECATION NOTICE
The following fields are deprecated and are planned for removal on or after April 1, 2026.

- `user_email`. This field is duplicated by `principal.name` for principals of type `user`. Other principal types do not have associated emails.
- `level`. This field is duplicated by `status`.
- `caller_ip_address`. This field is replaced by `x_forwarded_for`.
- `details`. This field is replaced by `raw_details` that includes request details.
- `category`. This field is no longer used.
:::

Audit Logs use the following JSON format:

```json
{
  "operation":         // Operation that was performed
  "principal":         // Information about who initiated the operation
  "details":           // DEPRECATED, see raw_details
  "raw_details":       // Details about the request
  "user_email":        // DEPRECATED, use principal.name where applicable
  "x_forwarded_for":   // The IP address making the call
  "caller_ip_address": // DEPRECATED, use x_forwarded_for
  "category":          // DEPRECATED, no longer used
  "emit_time":         // Time the operation was recorded
  "level":             // DEPRECATED, use status
  "log_id":            // Unique ID of the log entry
  "request_id":        // Optional async request ID set by the user when sending a request
  "status":            // Status, such as OK or ERROR
  "version":           // Version of the log entry
}
```

:::note
The [`X-Forwarded-For`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For) format is a comma-separated list of IP addresses. Evaluate the list from last to first, stopping at the first untrusted IP address; this accounts for proxies in the path. Temporal provides the caller IP address in this format so that you can identify the caller's IP address even if one or more proxies sit in the network path to Temporal Cloud.
:::

### Example of an Audit Log

```json
[
  {
    "operation": "UserLogin",
    "status": "OK",
    "version": 2,
    "logId": "edb3aa3e-78c4-48fc-9c7e-2078c6989775",
    "xForwardedFor": "10.1.2.3",
    "asyncOperationId": "",
    "emitTime": {
      "$typeName": "google.protobuf.Timestamp",
      "seconds": 1759436617,
      "nanos": 48000000
    },
    "principal": {
      "type": "user",
      "id": "",
      "name": "user@email.com",
      "apiKeyId": ""
    }
  },
  {
    "operation": "UserLogin",
    "status": "OK",
    "version": 2,
    "logId": "5fe6a81e-8d3c-4f4d-88a5-52db864c9ea5",
    "xForwardedFor": "10.1.2.3",
    "asyncOperationId": "",
    "emitTime": {
      "seconds": 1759178573,
      "nanos": 671000000
    },
    "principal": {
      "type": "user",
      "id": "",
      "name": "user@email.com",
      "apiKeyId": ""
    }
  }
]
```
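If you post-process entries like the ones above, a small struct makes the fields easier to work with. The following Go sketch models only the fields shown in the example; it is a hypothetical helper, not an official Temporal type.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"
)

// AuditLogEntry models the fields shown in the example above.
type AuditLogEntry struct {
	Operation        string `json:"operation"`
	Status           string `json:"status"`
	Version          int    `json:"version"`
	LogID            string `json:"logId"`
	XForwardedFor    string `json:"xForwardedFor"`
	AsyncOperationID string `json:"asyncOperationId"`
	EmitTime         struct {
		Seconds int64 `json:"seconds"`
		Nanos   int64 `json:"nanos"`
	} `json:"emitTime"`
	Principal struct {
		Type     string `json:"type"`
		ID       string `json:"id"`
		Name     string `json:"name"`
		APIKeyID string `json:"apiKeyId"`
	} `json:"principal"`
}

func main() {
	raw := `{"operation":"UserLogin","status":"OK","version":2,
	         "logId":"edb3aa3e-78c4-48fc-9c7e-2078c6989775",
	         "xForwardedFor":"10.1.2.3",
	         "emitTime":{"seconds":1759436617,"nanos":48000000},
	         "principal":{"type":"user","name":"user@email.com"}}`

	var entry AuditLogEntry
	if err := json.Unmarshal([]byte(raw), &entry); err != nil {
		log.Fatal(err)
	}

	// Convert the protobuf-style timestamp into a time.Time for display.
	emitted := time.Unix(entry.EmitTime.Seconds, entry.EmitTime.Nanos)
	fmt.Printf("%s %s by %s at %s\n",
		entry.Status, entry.Operation, entry.Principal.Name, emitted.UTC())
}
```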
## How to configure an Audit Log Integration {#configure-audit-logs}

Audit Logs can be configured in AWS Kinesis or GCP Pub/Sub.

- [AWS Kinesis Instructions](/cloud/audit-logs-aws)
- [GCP Pub/Sub Instructions](/cloud/audit-logs-gcp)

## How to troubleshoot Audit Log sink {#troubleshoot-audit-logs}

The Audit Logs page of the Temporal Cloud UI provides the current status of an Audit Log Integration.

- If an error is detected, a summary of the error appears below the page title.
- If the Audit Log Integration is functioning normally, an **On** badge appears next to the page heading.

After an Admin Operation is performed, users can see Audit Log messages flow through the stream. Upon successful configuration of the Audit Log sink and setup of a stream, you will receive events within the hour of setup.

Temporal retains Audit Log information for up to 30 days. To retrieve logs from the past 30 days, you will need to file a request. If you experience an issue with an Audit Log sink, we can provide the missing audit information. Open a support ticket to request assistance.

## How to delete an Audit Log sink {#delete-an-audit-log-sink}

To delete an Audit Log sink, follow these steps:

1. In the Temporal Cloud UI, select **Settings**.
1. On the **Settings** page, select **Audit Logs**.
1. In the **Audit Logs Integration** card, select **Edit**.
1. At the bottom of the **Audit Logs Integration** page, choose **Delete**.

After you confirm the deletion, the Audit Log sink is removed from your account and logs stop flowing to your stream.

## View an Audit Log {#view-an-audit-log}

An Audit Log can be viewed in the Temporal Cloud UI.

1. In the Temporal Cloud UI, select **Settings**.
1. On the **Settings** page, select **Audit Logs**.

Up to 1000 events can be downloaded from the Audit Log UI to a local file.

## Access an Audit Log via API {#audit-log-api}

An Audit Log can be accessed using the [Temporal Cloud Ops API](/ops). Use the API to access an Audit Log if you wish to make dashboards for viewing an Audit Log outside of Temporal Cloud. If your goal is to export an Audit Log, it is better to use an Audit Log sink and capture each entry as it is generated.

Audit Logs are accessible for the past 30 days using the API. The API allows:

- StartTimeInclusive: Filter for UTC time >= (defaults to 30 days ago) - optional
- EndTimeExclusive: Filter for UTC time < (defaults to current time) - optional
- PageSize: Cannot exceed 1000. Defaults to 100. - optional
- PageToken: The page token if this is continuing from another response - optional

---

## Exporting Workflow Event History to AWS S3

## Prerequisites

Before configuring the Export Sink, ensure you have the following:

- An AWS S3 bucket.
  - The S3 bucket must reside in the same region as your Namespace.
- (Optional) An IAM role that has write permission to the above S3 bucket.
  - You can follow the automation in the UI to create the IAM role. Pre-create the role if setting up Export via Terraform or `tcld`.
- (Optional) A KMS ARN associated with the S3 bucket.

## Configure Workflow History export

There are multiple ways to configure export: through the [Temporal Cloud UI](#using-temporal-cloud-ui), [`tcld`](#using-tcld), or [`terraform`](#using-terraform).

### Using Temporal Cloud UI

You can use the Temporal Cloud UI to configure the Workflow History Export. The Temporal Cloud UI provides two ways for configuring Workflow History Export:

- [Automated setup](#automated-setup) (recommended): The Cloud UI launches the AWS CloudFormation Console to create a stack with write permission to the S3 bucket.
- [Manual setup](#manual-setup): The Cloud UI provides a CloudFormation template for users to manually configure a CloudFormation stack.

:::info Why does Temporal Cloud provision multiple internal IAM roles to trust for Export?

Temporal Cloud creates multiple intermediary IAM roles for export operations for security purposes. The system randomly selects from these roles when writing to your storage sink, which provides several benefits:

- **Security isolation**: If one IAM role is compromised or needs to be decommissioned, other IAM roles remain available
- **Load distribution**: Avoids relying on a single IAM role, reducing security risk
- **Warm standby**: Keeps multiple IAM roles active to avoid potential throttling when switching between IAM roles
- **Reliability**: Provides resilience against cloud provider account-level issues that could affect a single IAM role

This approach prioritizes security and availability, ensuring robust export operations even if individual IAM roles encounter issues.
:::

The following steps guide you through setting up Workflow History Export using the Temporal Cloud UI.

![](/img/cloud/gcp/export-sink-ui.png)

:::tip
Don't forget to click **Create** at the end of your setup to confirm your export.
:::

#### Automated setup

You can use the automated setup to create a CloudFormation stack with write permission to your S3 bucket. Make sure to verify the export setup before you save the configuration.
1. Open the Temporal Cloud UI and navigate to the Namespace you want to configure.
2. Select **Configure** from the **Export** card.
3. Provide the following information to configure the export sink and then select **Create and launch stack**:
   - Name: A name for the export sink.
   - AWS S3 Bucket Name: The name of the configured AWS S3 bucket to send Closed Workflow Histories to.
   - AWS Account ID: The AWS account ID.
   - Role Name: The name of the AWS IAM role to use for the CloudFormation stack that has write permission to the S3 bucket.
   - KMS ARN: (optional) The ARN of the AWS KMS key to use for encryption of the exported Workflow History.
4. You will be taken to the CloudFormation Console to create the stack with pre-populated information. Review the information and then select **Create stack**.

#### Manual setup

You can manually configure a CloudFormation stack using the provided template.

1. Open the Temporal Cloud UI and navigate to the Namespace you want to configure.
2. Select **Configure** from the **Export** card.
3. Select **Manual** from **Access method**.
   - Enter the Template URL into your web browser to download your copy of the CloudFormation template.
   - Configure the CloudFormation template for your export sink.
   - Follow the steps in the [AWS documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-console-create-stack-template.html) by uploading the template to the CloudFormation console.

### Using `tcld`

Run the `tcld namespace export s3 create` command and provide the following information:

- `--namespace`: The Namespace to configure export for.
- `--sink-name`: The name of the export sink.
- `--role-arn`: The ARN of the AWS IAM role to use for the CloudFormation stack that has write permission to the S3 bucket.
- `--s3-bucket-name`: The name of the AWS S3 bucket.

For example:

```command
tcld namespace export s3 create \
  --namespace "your-namespace.your-account" \
  --sink-name "your-sink-name" \
  --role-arn "arn:aws:iam::123456789012:role/test-sink" \
  --s3-bucket-name "your-aws-s3-bucket-name"
```

Retrieve the status of this command by running the `tcld namespace export s3 get` command. For example:

```command
tcld namespace export s3 get \
  --namespace "your-namespace.your-account" \
  --sink-name "your-sink-name"
```

The following is an example of the output:

```json
{
  "name": "your-sink-name",
  "resourceVersion": "a6442895-1c07-4da4-aaca-58d57d338345",
  "state": "Active",
  "spec": {
    "name": "your-sink-name",
    "enabled": true,
    "destinationType": "S3",
    "s3Sink": {
      "roleName": "your-export-test",
      "bucketName": "your-export-test",
      "region": "us-east-1",
      "kmsArn": "",
      "awsAccountId": "123456789012"
    }
  },
  "health": "Ok",
  "errorMessage": "",
  "latestDataExportTime": "0001-01-01T00:00:00Z",
  "lastHealthCheckTime": "2023-08-14T21:30:02Z"
}
```

### Using `terraform`

See the [Terraform export support](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/namespace_export_sink) for setup instructions.

### Next Steps

- [Verify export setup](/cloud/export#verify)
- [Monitor export progress](/cloud/export#monitor)
- [Work with exported files](/cloud/export#working-with-exported-files)

---

## Capacity Modes

Each Namespace in Temporal has a rate limit, which is measured in [Actions](/cloud/pricing#action) per second. Temporal offers two different modes for adjusting capacity: On-Demand Capacity or Provisioned Capacity. With On-Demand Capacity, Namespace capacity is increased automatically along with usage.
With Provisioned Capacity, you can control your capacity limits by requesting Temporal Resource Units (TRUs).

## Namespace Capacity

Namespaces in Temporal can be set to either an **On-Demand** or **Provisioned** Capacity Mode. These modes govern how limits are assigned to a Namespace.

Actions Per Second (APS) is the primary limit for Namespaces and is based on the billable Actions that occur each second. Some Actions can result in multiple back-end operations, so limits are also set on Requests Per Second (RPS) and Operations Per Second (OPS) to maintain reliability. See [Service-level RPS limits](/references/dynamic-configuration#service-level-rps-limits) for more about RPS. See the [operations list](/references/operation-list) for the list of operations. See the [Actions page](/cloud/actions) for the list of actions.

:::tip Measuring throughput with APS, RPS, and OPS

APS, RPS, and OPS are all measures of throughput that apply to different aspects of Temporal.

APS, or Actions Per Second, is specific to Temporal Cloud. It measures the rate at which Actions, like starting or signaling a Workflow, can be performed in a specific Namespace. Temporal Cloud uses APS to protect the system from sudden major spikes in load.

RPS, or Requests Per Second, is used in the Temporal Service, both in self-hosted Temporal and Temporal Cloud. It measures and controls the rate of gRPC requests to the Service. This is a lower-level measure that manages rates at the service level.

OPS, or Operations Per Second, is used by Temporal Cloud. An operation is anything a user does directly, or that Temporal does on behalf of the user in the background, that results in load on the Temporal Server. This is a lower-level measure that manages rates across Temporal Cloud services.

In summary, APS is a higher-level measure to limit and mitigate Action spikes in Temporal Cloud. RPS and OPS are lower-level measures to control and balance request rates at the service level.
:::

### What happens when my Actions Rate exceeds my Limit?

When your Action rate exceeds your quota, Temporal Cloud throttles Actions until the rate matches your quota. Throttling means limiting the rate at which Actions are performed to prevent the Namespace from exceeding its APS limit. Your work is never lost and will continue at the limited pace until APS returns below the limit.

Your rate limits can be adjusted automatically over time or provisioned manually with Capacity Modes. We recommend tracking your Actions Rate and Limits using Temporal metrics to assess your use case's specific needs. See [Monitoring Trends Against Limits](/cloud/service-health#rps-aps-rate-limits) to track usage trends.

:::note Actions that don't count against APS
Actions that are external to the core Temporal service do not contribute to your APS. These calls include:

- [Export](/cloud/export)
- Capacity Related Actions
:::

## On-Demand Capacity {#on-demand-capacity}

Using On-Demand Capacity, your rate limit grows automatically along with your usage.

|               | Actions Per Second | Requests Per Second | Operations Per Second |
| ------------- | ------------------ | ------------------- | --------------------- |
| Default Limit | 500                | 2000                | 4000                  |

Scaling automatically adjusts based on the lesser of 4 * APS Average or 2 * APS P90 over the past 7 days. If you experience usage spikes, you may hit a throughput limit. In that case, consider switching to [Provisioned Capacity](#provisioned-capacity). You can also optimize your workload to remain under the On-Demand limits. See [Best Practices for Managing APS Limits](/best-practices/managing-aps-limits) for more information.

### What kind of throughput can I get on Temporal Cloud with On-Demand Capacity?

Each Namespace has a rate limit, which is measured in Actions per second (APS). A Namespace's default limit is set at 500 APS and automatically adjusts based on a formula that compares your average usage over the last 7 days and your usage at the 90th percentile, or P90. Your throughput limit will never fall below the default value. Under On-Demand Capacity you are only charged for the Actions you use.

For example: If your average APS in the last 7 days was 200 APS, and your P90 was 500 APS, then your limit would be calculated as follows:

Greater of:

* Default limit of 500 APS
* The lesser of:
  * 4 * 200 APS average = 800 APS
  * 2 * 500 APS P90 = 1000 APS

This means that your limit would be 800 APS.

![Usage graph showing increasing APS usage for one month, with occasional spikes, and a rising APS limit](/img/cloud/provisioned-capacity/usage_graph.png)
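The scaling rule above can be stated compactly in code. The following is a minimal Go sketch of the published formula (it uses the built-in `min`/`max`, so it assumes Go 1.21+), checked against the worked example:

```go
package main

import "fmt"

// onDemandAPSLimit sketches the On-Demand formula described above: the
// greater of the 500 APS default and the lesser of 4x the 7-day average
// APS and 2x the 7-day P90 APS.
func onDemandAPSLimit(avgAPS, p90APS float64) float64 {
	const defaultLimit = 500.0
	scaled := min(4*avgAPS, 2*p90APS)
	return max(defaultLimit, scaled)
}

func main() {
	// The worked example above: 200 APS average, 500 APS P90 -> 800 APS.
	fmt.Println(onDemandAPSLimit(200, 500))
}
```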
## Provisioned Capacity {#provisioned-capacity}

:::tip Support, stability, and dependency info
Provisioned Capacity is currently in [pre-release](/evaluate/development-production-features/release-stages#pre-release). Please contact your AE or Support to enable this feature.
:::

Provisioned Capacity provides an alternative to On-Demand Capacity by allowing you to control the limits on your Namespace based on your specific needs.

|         | Actions Per Second | Requests Per Second | Operations Per Second |
| ------- | ------------------ | ------------------- | --------------------- |
| Per TRU | 500                | 2000                | 4000                  |

Customers can set 2, 3, 4, 6, 8, 10, or 12 TRUs, subject to availability. TRUs can be adjusted hourly. See [Capacity Mode Pricing](/cloud/pricing#capacity-modes-pricing) for pricing implications.

### What kind of throughput can I get with Temporal Cloud with Provisioned Capacity?

With Provisioned Capacity, you can set your rate limits by selecting the number of Temporal Resource Units (TRUs) on your Namespace. Each TRU supports up to 500 APS and can be provisioned in groups of 2, 3, 4, 6, 8, 10, or 12 TRUs if there is capacity available in a region. When TRUs are requested, Temporal aims to provision the additional capacity within two minutes.

:::tip Large TRU requests
For requests in excess of 4 TRUs in regions outside of the US, we recommend submitting a support ticket to ensure capacity availability.
:::

### Provisioned Capacity Availability

The amount of capacity available within a region may vary. Temporal checks available capacity at the time of your request and aims to provision requested capacity within two minutes. If you need capacity beyond what is self-serviceable or available in a region, please [file a support ticket](https://docs.temporal.io/cloud/support#ticketing) indicating the limit, region, and timeframe that the capacity is needed.

### When should I use Provisioned Capacity?

Provisioned Capacity works well when you're aware of specific increases in load on your Namespace. For example:

* Planned events
* Unplanned events/usage spikes
* Known but sudden system spikes
* Load testing
* Migrating workloads

Depending on your usage patterns and your system monitoring, you can use Provisioned Capacity to quickly remedy rate limiting without contacting support. You can also automate changes in capacity if you have a known event or a recurring usage pattern that produces predictable usage spikes.
## Setting Capacity Modes

Capacity Modes and TRUs can be set via the Temporal Cloud UI, CLI, or API. Capacity Modes can be set and adjusted by Global Admins and Namespace Admins.

### Setting Capacity Modes from the UI

You can set Capacity Modes for an individual Namespace by navigating to the Namespace page in the Temporal Cloud UI (`https://cloud.temporal.io/namespaces/`). To view your current capacity configuration and change your Capacity Mode, navigate to the capacity tile and click *Manage Capacity*.

![Manage Capacity button in the Temporal UI](/img/cloud/provisioned-capacity/manage_capacity_button.png)

Under *Manage Capacity* you will be able to select between *On-Demand* and *Provisioned* Capacity Modes. The *On-Demand* section will display your available On-Demand capacity. The *Provisioned* section will display the limit available with selected TRUs and the Included Actions required per hour. [See details on Provisioned Capacity Pricing](/cloud/pricing#capacity-modes-pricing).

To switch to Provisioned Capacity:

1. Select the *Provisioned* radio button.
1. Specify the requested number of TRUs using the slider.
1. Check the dialog acknowledging potential pricing implications.
1. Click *Confirm*.

In addition to the Capacity Mode selections, a summary of APS usage over the last seven days is included to help you estimate your current usage. For more detailed information, we recommend setting up metrics that track your APS and Limits. See [Monitoring Trends Against Limits](/cloud/service-health#rps-aps-rate-limits) to track usage trends.

![Manage Capacity panel in the Temporal UI](/img/cloud/provisioned-capacity/manage_capacity_panel.png)

### Setting Capacity Modes from the CLI

```command
tcld capacity update --namespace <namespace> --capacity-mode <mode> --capacity-value <TRUs> [--request-id <id> --resource-version <version>]
```

Use this command to specify the Namespace name and configure the capacity settings:

* `--capacity-mode` sets the billing mode for the Namespace. Use `on_demand` for automatic scaling or `provisioned` for a fixed capacity allocation.
* `--capacity-value` sets the throughput value in TRUs (Temporal Resource Units).

Optional flags:

* `--request-id` specifies a request identifier for the asynchronous operation. If not specified, the server assigns one automatically.
* `--resource-version` specifies the resource version (etag) to update from. If not set, the CLI uses the latest version.

If using API key authentication with the `--api-key` flag, you must add it directly after `tcld` and before `capacity update`.
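For example, to provision 4 TRUs on a Namespace (a sketch assembled from the flags documented above; the Namespace name is a placeholder):

```command
tcld capacity update \
  --namespace "your-namespace.your-account" \
  --capacity-mode provisioned \
  --capacity-value 4
```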
### Setting Capacity Modes from the API

Call the `UpdateNamespace` API after Namespace creation and define the desired capacity state as part of the capacity spec.

---

## AWS PrivateLink Connectivity

[AWS PrivateLink](https://aws.amazon.com/privatelink/) allows you to open a path to Temporal without opening a public egress. It establishes a private connection between your Amazon Virtual Private Cloud (VPC) and Temporal Cloud. This one-way connection means Temporal cannot establish a connection back to your service. This is useful if you normally block traffic egress as part of your security protocols. If you use a private environment that does not allow external connectivity, you will remain isolated.

## Requirements

Your AWS PrivateLink endpoint must be in the same region as your Temporal Cloud Namespace. If using [replication for High Availability](/cloud/high-availability), the PrivateLink connection must be in the same region as one of the replicas. AWS Cross-Region endpoints are not supported.

## Creating an AWS PrivateLink connection

Set up PrivateLink connectivity with Temporal Cloud with these steps:

1. Open the AWS console in the region you want to use to establish the PrivateLink connection.
2. Search for "VPC" in _Services_ and select the option. ![AWS console showing services, features, resources](/img/cloud/privatelink/aws-console.png)
3. Select _Virtual private cloud_ > _Endpoints_ from the left menu bar.
4. Click the _Create endpoint_ button to the right of the _Actions_ pulldown menu.
5. Under the _Type_ category, select _Endpoint services that use NLBs and GWLBs_. This option lets you find services shared with you by service name.
6. Under _Service settings_, fill in the _Service name_ with the PrivateLink Service Name for the region you're trying to connect from.

   :::tip
   PrivateLink endpoint services are regional. Individual Namespaces do not use separate services.
   :::

7. Confirm your service by clicking on the _Verify service_ button. AWS should respond "Service name verified." ![The service name field is filled out and the Verify service button is shown](/img/cloud/privatelink/service-settings.png)
8. Select the VPC and subnets to peer with the Temporal Cloud service endpoint.
9. Select the security group that will control traffic sources for this VPC endpoint. The security group must accept TCP ingress traffic to port 7233 for gRPC communication with Temporal Cloud.
10. Click the _Create endpoint_ button at the bottom of the screen. If successful, AWS reports "Successfully created VPC endpoint." and lists the new endpoint. The new endpoint appears in the Endpoints list, along with its ID. ![The created endpoint appears in the Endpoints list](/img/cloud/privatelink/endpoint-created.png)
11. Click on the VPC endpoint ID in the Endpoints list to check its status. Wait for the status to be "Available". This can take up to 10 minutes.
12. Once the status is "Available", the AWS PrivateLink connection is ready for use.

:::caution
You still need to set up private DNS or override client configuration for your clients to actually use the new PrivateLink connection to connect to Temporal Cloud. See [configure private DNS for AWS PrivateLink](#configuring-private-dns-for-aws-privatelink).
:::

![Highlighted DNS names section shows your hostname](/img/cloud/privatelink/details.png)

## Configuring Private DNS for AWS PrivateLink

### Why configure private DNS?

When you connect to Temporal Cloud through AWS PrivateLink you normally must:

1. **Point your SDKs/Workers at the PrivateLink DNS name** for the VPC Endpoint (e.g., `vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com`), **and**
2. **Override the Server Name Indication (SNI)** so that the TLS handshake still presents the public Temporal Cloud hostname (e.g., `my-namespace.my-account.tmprl.cloud`).

By creating a Route 53 **private hosted zone (PHZ)** that maps the public Temporal Cloud hostname (or region hostname) to your VPC Endpoint, you can:

- Keep using the standard Temporal Cloud hostnames in code and configuration.
- Eliminate the need to set a custom SNI override.
- Make future Endpoint rotations transparent—only the PHZ record changes.

This approach is **optional**; Temporal Cloud works without it. It simply streamlines configuration and operations. If you cannot use private DNS, refer to [our guide for updating the server and TLS settings on your clients](/cloud/connectivity#update-dns-or-clients-to-use-private-connectivity).
### Prerequisites

| Requirement | Notes |
| --- | --- |
| AWS VPC with DNS resolution and DNS hostnames enabled | _VPC console → Edit DNS settings → enable both checkboxes._ |
| Interface VPC Endpoint for Temporal Cloud | Subnets must be associated with the VPC and the Security Group must allow TCP ingress traffic to port 7233 from the appropriate hosts. |
| Route 53 available in your AWS account | You need permission to create Private Hosted Zones and records. |
| Namespace details | Needed to choose the correct override domain pattern below. |

### Choose the override domain and endpoint

| Temporal Cloud setup | Use this PHZ domain | Example |
| --- | --- | --- |
| Single-region namespace with mTLS auth | `.tmprl.cloud` | `payments.abcde.tmprl.cloud` ↔ `vpce-...` |
| Single-region namespace with API-key auth | `.api.temporal.io` | `us-east-1.aws.api.temporal.io` ↔ `vpce-...` |
| Multi-region namespace | `region.tmprl.cloud` | `aws-us-east-1.region.tmprl.cloud` ↔ `vpce-...` |

### Step-by-step instructions

#### 1. Collect your PrivateLink endpoint DNS name

```bash
aws ec2 describe-vpc-endpoints \
  --vpc-endpoint-ids $VPC_ENDPOINT_ID \
  --query "VpcEndpoints[0].DnsEntries[0].DnsName" \
  --output text
# vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com
```

Save the **`vpce-*.amazonaws.com`** value -- you will target it in the CNAME record.

#### 2. Create a Route 53 Private Hosted Zone

1. Open _Route 53 → Hosted zones → Create hosted zone_.
2. Enter the domain chosen from the table above, e.g., `payments.abcde.tmprl.cloud`.
3. For the type, select _Private hosted zone_.
4. Associate the hosted zone with every VPC that contains Temporal Workers and/or SDK clients.
5. Create the hosted zone.

#### 3. Add a CNAME record

Inside the new PHZ:

| Field | Value |
| --- | --- |
| **Record name** | The namespace endpoint (e.g., `payments.abcde.tmprl.cloud`). |
| **Record type** | `CNAME` |
| **Value** | Your VPC Endpoint DNS name (`vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com`) |
| **TTL** | 60s is typical; 15s for multi-region Namespaces; adjust as needed. |

#### 4. Verify DNS resolution from inside the VPC

```bash
dig payments.abcde.tmprl.cloud
```

If the record resolves to the VPC Endpoint, you are ready to use Temporal Cloud without SNI overrides.

### Updating your workers/clients

With private DNS in place, configure your SDKs exactly as the public-internet examples show (filling in your own namespace):

```go
clientOptions := client.Options{
	HostPort:  "payments.abcde.tmprl.cloud:7233",
	Namespace: "payments.abcde",
	// No TLS SNI override needed
}
```

The DNS resolver inside your VPC returns the private endpoint, while TLS still validates the original hostname—simplifying both code and certificate management.
## Configure Private DNS for Multi-Region Namespaces

:::tip Namespaces with High Availability features and AWS PrivateLink
Proper networking configuration is required for failover to be transparent to clients and workers when using PrivateLink. This page describes how to configure routing for Namespaces with High Availability features on AWS PrivateLink.
:::

To use AWS PrivateLink with High Availability features, you may need to:

- Override the regional DNS zone.
- Ensure network connectivity between the two regions.

This page provides the details you need to set this up.

### Customer side solutions

When using PrivateLink, you connect to Temporal Cloud through a VPC Endpoint, which uses addresses local to your network. Temporal treats each `region.tmprl.cloud` subdomain as a separate zone. This setup allows you to override the default zone, ensuring that traffic is routed internally for the regions you're using.

A Namespace's active region is reflected in the target of a CNAME record. For example, if the active region of a Namespace is AWS us-west-2, the DNS configuration would look like this:

| Record name | Record type | Value |
| --- | --- | --- |
| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-west-2.region.tmprl.cloud |

After a failover, the CNAME record will be updated to point to the failover region, for example:

| Record name | Record type | Value |
| --- | --- | --- |
| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-east-1.region.tmprl.cloud |

The Temporal domain did not change, but the CNAME updated from us-west-2 to us-east-1.

### Setting up the DNS override

To set up the DNS override, configure specific regions to target the internal VPC Endpoint IP addresses. For example, you might set aws-us-west-1.region.tmprl.cloud to target 192.168.1.2. In AWS, this can be done using a Route 53 private hosted zone for `region.tmprl.cloud`. Link that private zone to the VPCs you use for Workers.

When your Workers connect to the Namespace, they first resolve the Namespace record (for example, `ha-namespace.account-id.tmprl.cloud`). This points to the `<region>.region.tmprl.cloud` record, which then resolves to your internal IP addresses.

Consider how you'll configure Workers for this setup. You can either have Workers run in both regions continuously or establish connectivity between regions using Transit Gateway or VPC Peering. This way, Workers can access the newly activated region once failover occurs.

## Available AWS regions, PrivateLink endpoints, and DNS record overrides

The following table lists the available Temporal regions, PrivateLink endpoints, and regional endpoints used for DNS record overrides:

---

## Google Private Service Connect Connectivity

[Google Cloud Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect) allows you to open a path to Temporal without opening a public egress. It establishes a private connection between your Google Virtual Private Cloud (VPC) and Temporal Cloud. This one-way connection means Temporal cannot establish a connection back to your service. This is useful if you normally block traffic egress as part of your security protocols. If you use a private environment that does not allow external connectivity, you will remain isolated.

:::warning Namespaces with High Availability features and GCP Private Service Connect
Automatic failover via Temporal Cloud DNS is not currently supported with GCP Private Service Connect. If you use GCP Private Service Connect, you must manually update your workers to point to the active region's Private Service Connect endpoint when a failover occurs.
:::

## Requirements

Your GCP Private Service Connect connection must be in the same region as your Temporal Cloud Namespace.
If using [replication for High Availability](/cloud/high-availability), the PSC connection must be in the same region as one of the replicas.

## Creating a Private Service Connect connection

Set up Private Service Connect with Temporal Cloud with these steps:

1. Open the Google Cloud console.
2. Navigate to **Network Services**, then **Private Service Connect**. If you haven't used **Network Services** recently, you might have to find it by clicking on **View All Products** at the bottom of the left sidebar. ![GCP console showing Network Services, and the View All Products button](/img/cloud/gcp/gcp-console.png)
3. Go to the **Endpoints** section. Click on **Connect endpoint**. ![GCP console showing the endpoints, and the Connect endpoint button](/img/cloud/gcp/connect-endpoint-button.png)
4. Under **Target**, select **Published service**. This changes the contents of the form so you can fill in the rest as described below. ![GCP console showing the Connect endpoint form](/img/cloud/gcp/connect-endpoint.png)
   - For **Target service**, fill in the **Service name** with the Private Service Connect Service Name for the region you're trying to connect to.

     :::tip
     GCP Private Service Connect services are regional. Individual Namespaces do not use separate services.
     :::

   - For **Endpoint name**, enter a unique identifier to use for this endpoint. It could be, for instance, `temporal-api`, or `temporal-api-<namespace>` if you want a different endpoint per Namespace.
   - For **Network** and **Subnetwork**, choose the network and subnetwork where you want to publish your endpoint.
   - For **IP address**, click the dropdown and select **Create IP address** to create an internal IP from your subnet dedicated to the endpoint. Select this IP.
   - Check **Enable global access** if you intend to connect the endpoint to virtual machines outside of the selected region. We recommend regional connectivity instead of global access, as it can be better in terms of latency for your workers. _**Note:** this requires the network routing mode to be set to **GLOBAL**._
5. Click the **Add endpoint** button at the bottom of the screen.
6. [Create a Temporal Cloud Connectivity Rule](/cloud/connectivity#creating-a-connectivity-rule) using the Connection ID of the newly created endpoint and the corresponding GCP Project.
7. Once the status is "Accepted", the GCP Private Service Connect endpoint is ready for use. Take note of the **IP address** that has been assigned to your endpoint, as it will be used to connect to Temporal Cloud.

:::caution
You still need to set up private DNS or override client configuration for your clients to actually use the new Private Service Connect connection to connect to Temporal Cloud. See [configuring private DNS for GCP Private Service Connect](#configuring-private-dns-for-gcp-private-service-connect).
:::
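If you manage infrastructure from the command line rather than the console, the console steps above map roughly to the following `gcloud` sketch. The region, network, subnet, and resource names are placeholders, and `<temporal-psc-service-name>` stands for the regional Service Name referenced in step 4; verify the flags against your gcloud version before relying on this.

```shell
# Reserve an internal IP for the endpoint in your subnet.
gcloud compute addresses create temporal-api-ip \
  --region=us-central1 \
  --subnet=your-subnet

# Create the PSC endpoint against Temporal's published service for the region.
gcloud compute forwarding-rules create temporal-api \
  --region=us-central1 \
  --network=your-network \
  --address=temporal-api-ip \
  --target-service-attachment=<temporal-psc-service-name>
```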
## Configuring Private DNS for GCP Private Service Connect

### Why configure private DNS?

When you connect to Temporal Cloud through GCP Private Service Connect you normally must:

1. **Point your SDKs/Workers at the Private Service Connect endpoint IP address**, _and_
2. **Override the Server Name Indication (SNI)** so that the TLS handshake still presents the public Temporal Cloud hostname (e.g., `my-namespace.my-account.tmprl.cloud`).

By creating a **private Cloud DNS zone** that maps the public Temporal Cloud hostname (or the region hostname) directly to the PSC endpoint IP address, you can:

- Keep using the standard Temporal Cloud hostnames in code and configuration.
- Eliminate the need to set a custom SNI override.
- Make future endpoint rotations transparent—only the DNS record changes.

This approach is **optional**; Temporal Cloud works without it. It simply streamlines configuration and operations. If you cannot use private DNS, refer to [our guide for updating the server and TLS settings on your clients](/cloud/connectivity#update-dns-or-clients-to-use-private-connectivity).

### Prerequisites

| Requirement | Notes |
| --- | --- |
| Google Cloud VPC Network with DNS enabled | PSC endpoints and the DNS zone must live in (or be attached to) the same network. |
| Private Service Connect endpoint for Temporal Cloud | Create an endpoint and reserve an internal IP in the namespace region. |
| Cloud DNS API enabled and roles/dns.admin permissions | Needed to create private zones and records. |
| Namespace details | Determines which hostname pattern you override (table below). |

### Choose the override domain and endpoint

| Temporal Cloud setup | Use this private zone domain | Example |
| --- | --- | --- |
| Single-region namespace with mTLS auth | `.tmprl.cloud` | `payments.abcde.tmprl.cloud` ↔ `X.X.X.X` |
| Single-region namespace with API-key auth | `.api.temporal.io` | `us-central1.gcp.api.temporal.io` ↔ `X.X.X.X` |
| Multi-region namespace | `region.tmprl.cloud` | `gcp-us-central1.region.tmprl.cloud` ↔ `X.X.X.X` |

### Step-by-step instructions

#### 1. Collect your PSC endpoint IP address

```shell
# List the forwarding rule you created for the endpoint
gcloud compute forwarding-rules list \
  --filter="NAME:<endpoint-name>" \
  --format="value(IP_ADDRESS)"
# Example output: 10.1.2.3
```

Save the internal IP -- you will point the A record at it.

#### 2. Create a Cloud DNS private zone

1. Open _Network Services → Cloud DNS → Create zone_.
2. Select zone type **Private**.
3. Enter a **Zone name** (e.g., `temporal-cloud`).
4. Enter a **DNS name** based on the table above (e.g., `payments.abcde.tmprl.cloud` or `gcp-us-central1.region.tmprl.cloud`).
5. Select **Add networks** and choose the Project and Network that contains your PSC endpoint.
6. Click **Create**.

#### 3. Add an A record

Inside the new zone, add a _standard A record_:

| Field | Value |
| --- | --- |
| DNS name | The namespace endpoint (e.g., `payments.abcde.tmprl.cloud`) |
| Resource record type | A |
| TTL | 60s is typical, but you can adjust as needed. |
| IPv4 Address | The internal IP address of your PSC endpoint (e.g., `10.1.2.3`) |

#### 4. Verify DNS resolution from inside the Network

```shell
dig payments.abcde.tmprl.cloud
```

If the hostname resolves to the PSC endpoint IP address from a VM in the bound network, the override is working.

### Updating your workers/clients

With private DNS in place, configure your SDKs exactly as the public-internet examples show (filling in your own namespace):

```go
clientOptions := client.Options{
	HostPort:  "payments.abcde.tmprl.cloud:7233",
	Namespace: "payments.abcde",
	// No TLS SNI override needed
}
```

The DNS resolver inside your network returns the private endpoint IP address, while TLS still validates the original hostname—simplifying both code and certificate management.
## Available GCP regions, PSC endpoints, and DNS record overrides

The following table lists the available Temporal regions, Private Service Connect endpoints, and regional endpoints used for DNS record overrides:

---

## Connectivity

## Private network connectivity for namespaces

Temporal Cloud supports private connectivity to namespaces via AWS PrivateLink or GCP Private Service Connect in addition to the default internet endpoints. Namespace access is always securely authenticated via [API keys](/cloud/api-keys#overview) or [mTLS](/cloud/certificates), regardless of how you choose to connect.

For information about IP address stability and allowlisting, see [IP addresses](/cloud/connectivity/ip-addresses).

### Required steps

To use private connectivity with Temporal Cloud:

1. Set up the private connection from your VPC to the region where your Temporal namespace is located.
1. Update your private DNS and/or worker configuration to use the private connection.
1. (Required to complete Google PSC setup, optional if using AWS PrivateLink): create a connectivity rule for the private connection and attach it to the target namespace(s). This will block all access to the namespace that is not over the private connection, but you can also add a public rule to allow internet connectivity.

For steps 1 and 2, follow our guides for the target namespace's cloud provider:

- [AWS PrivateLink](/cloud/connectivity/aws-connectivity) creation and private DNS setup
- [Google Cloud Private Service Connect](/cloud/connectivity/gcp-connectivity) creation and private DNS setup

:::caution Finish client setup (complete step 2)
After creating a private connection, you must set up private DNS or update the configuration of all clients you want to use the private connection. We recommend using private DNS. Without this step, your clients may connect to the namespace over the internet if they were previously using public connectivity, or they will not be able to connect at all.

If that's not an option for you, refer to [our guide for updating the server and TLS settings on your clients](/cloud/connectivity#update-dns-or-clients-to-use-private-connectivity).
:::

For step 3, keep reading for details on [connectivity rules](/cloud/connectivity#connectivity-rules).

## Connectivity rules

:::tip Support, stability, and dependency info
Connectivity rules are currently in [public preview](/evaluate/development-production-features/release-stages#public-preview).
:::

:::info Web UI Connectivity
The Temporal Cloud Web UI is not currently subject to connectivity rule enforcement. Even if a namespace is configured with private connectivity rules, the Web UI for that namespace remains accessible over the public internet.
:::

### Definition

Connectivity rules are Temporal Cloud's mechanism for limiting the network access paths that can be used to access a namespace. By default, a namespace has zero connectivity rules and is accessible from:

1. the public internet, and
2. all private connections you've configured to the region containing the namespace.

Namespace access is always securely authenticated via [API keys](/cloud/api-keys#overview) or [mTLS](/cloud/certificates), regardless of connectivity rules.

When you attach one or more connectivity rules to a namespace, Temporal Cloud will immediately block all traffic that does not have a corresponding connectivity rule from accessing the namespace. One namespace can have multiple connectivity rules, and may mix both public and private rules.
Each connectivity rule specifies either generic public (i.e. internet) access or a specific private connection. A public connectivity rule takes no parameters.

An AWS PrivateLink (PL) private connectivity rule requires the following parameters:

- `connection-id`: The VPC endpoint ID of the PL connection (ex: `vpce-00939a7ed9EXAMPLE`)
- `region`: The region of the PL connection, prefixed with `aws` (ex: `aws-us-east-1`). Must be the same region as the namespace. Refer to the [Temporal Cloud region list](/cloud/regions) for supported regions.

A GCP Private Service Connect (PSC) private connectivity rule requires the following parameters:

- `connection-id`: The ID of the PSC connection (ex: `1234567890123456789`)
- `region`: The region of the PSC connection, prefixed with `gcp` (ex: `gcp-us-east1`). Must be the same region as the namespace. Refer to the [Temporal Cloud region list](/cloud/regions) for supported regions.
- `gcp-project-id`: The ID of the GCP project where you created the PSC connection (ex: `my-example-project-123`)

Connectivity rules can be created and managed with [tcld](https://docs.temporal.io/cloud/tcld/), [Terraform](https://github.com/temporalio/terraform-provider-temporalcloud/), or the [Cloud Ops API](/ops).

### Permissions and limits

Only [Account Admins and Account Owners](/cloud/users#account-level-roles) can create and manage connectivity rules. Connectivity rules are visible to Account Developers, Account Admins, and Account Owners.

By default, each namespace is limited to 5 private connectivity rules, and each account is limited to 50 private connectivity rules. You can [contact support](/cloud/support#support-ticket) to request a higher limit.

Only one public rule is allowed per account, because it's generic and can be reused for all namespaces that you want to be available on the internet. Trying to create more than one public rule returns an error.

## Creating a connectivity rule

### Temporal Cloud CLI (tcld)

Create a private connectivity rule (AWS):

```bash
tcld connectivity-rule create --connectivity-type private --connection-id "vpce-abcde" --region "aws-us-east-1"
```

Create a private connectivity rule (GCP):

```bash
tcld connectivity-rule create --connectivity-type private --connection-id "1234567890" --region "gcp-us-central1" --gcp-project-id "my-project-123"
```

Create a public connectivity rule (you only need to do this once per account):

```bash
tcld connectivity-rule create --connectivity-type public
```

The `cr` alias works the same way.

Private connectivity rule:

```bash
tcld cr create --connectivity-type private --connection-id "vpce-abcde" --region "aws-us-east-1"
```

Public connectivity rule:

```bash
tcld cr create --connectivity-type public
```

### Terraform

[Examples in the Terraform repo](https://github.com/temporalio/terraform-provider-temporalcloud/blob/main/examples/resources/temporalcloud_connectivity_rule/resource.tf)

## Attach connectivity rules to a namespace

Be careful! When any connectivity rules are set on a namespace, that namespace is ONLY accessible via the connections defined in those rules. If you remove a connectivity rule that your workers are using, your traffic will be interrupted.

If you already have workers using a namespace, attaching both a public rule and your private rules simultaneously helps you avoid unintended loss of access: first confirm all workers are using the private connection, then remove the public rule, as sketched below.
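As a concrete, hedged sketch of that migration with tcld (the rule IDs below are placeholders standing in for the IDs of the rules you created, and the attach syntax is covered in the next section):

```bash
# 1. Create a public rule and a private rule (connection details are examples)
tcld connectivity-rule create --connectivity-type public
tcld connectivity-rule create --connectivity-type private \
  --connection-id "vpce-abcde" --region "aws-us-east-1"

# 2. Attach BOTH rules so existing internet-connected workers keep working
tcld namespace set-connectivity-rules --namespace "my-namespace.abc123" \
  --connectivity-rule-ids "public-rule-id" --connectivity-rule-ids "private-rule-id"

# 3. Once all workers connect privately, re-attach only the private rule
tcld namespace set-connectivity-rules --namespace "my-namespace.abc123" \
  --connectivity-rule-ids "private-rule-id"
```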
### Temporal Cloud CLI (tcld)

Setting the connectivity rules on a namespace:

```bash
tcld namespace set-connectivity-rules --namespace "my-namespace.abc123" --connectivity-rule-ids "rule-id-1" --connectivity-rule-ids "rule-id-2"
```

Or using aliases:

```bash
tcld n scrs -n "my-namespace.abc123" --ids "rule-id-1" --ids "rule-id-2"
```

Connectivity rules are attached as a set, so if rules `rule-a`, `rule-b`, and `rule-c` were attached to a namespace and you wanted to detach `rule-c` only, you'd make one call attaching both `rule-a` and `rule-b`:

```bash
tcld namespace set-connectivity-rules --namespace "my-namespace.abc123" --connectivity-rule-ids "rule-a" --connectivity-rule-ids "rule-b"
```

Remove all connectivity rules (this will make the namespace public):

```bash
tcld namespace set-connectivity-rules --namespace "my-namespace.abc123" --remove-all
```

### Terraform

[Example in the Terraform repo](https://github.com/temporalio/terraform-provider-temporalcloud/tree/main/examples/resources/temporalcloud_namespace/resource.tf#L113-L128)

## View the connectivity rules for a namespace

You have two ways to view the connectivity rules attached to a particular namespace.

### Get namespace

Connectivity rules are included in the namespace details returned by the `namespace get` command.

```bash
tcld namespace get -n "my-namespace.abc123"
```

### List connectivity rules by namespace

To see only the connectivity rules for a specific namespace (without other namespace details), use the `connectivity-rule list` command with a namespace argument.

```bash
tcld connectivity-rule list -n "my-namespace.abc123"
```

## Update DNS or clients to use private connectivity

We strongly recommend using private DNS instead of updating client server and TLS settings:

- [How to set up private DNS in AWS](/cloud/connectivity/aws-connectivity#configuring-private-dns-for-aws-privatelink)
- [How to set up private DNS in GCP](/cloud/connectivity/gcp-connectivity#configuring-private-dns-for-gcp-private-service-connect)

If you are unable to configure private DNS, you must update two settings in your Temporal clients:

1. Set the endpoint server address to the PrivateLink or Private Service Connect endpoint (e.g. `vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233` or `:7233`)
2. Set the TLS configuration to override the TLS server name (e.g., `my-namespace.my-account.tmprl.cloud`)

Updating these settings depends on the client you're using.
#### temporal CLI

```bash
TEMPORAL_ADDRESS=vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233 \
TEMPORAL_NAMESPACE=my-namespace.my-account \
TEMPORAL_TLS_CERT= \
TEMPORAL_TLS_KEY= \
TEMPORAL_TLS_SERVER_NAME=my-namespace.my-account.tmprl.cloud \
temporal workflow count -n $TEMPORAL_NAMESPACE
```

#### grpcurl

```bash
grpcurl \
  -servername my-namespace.my-account.tmprl.cloud \
  -cert path/to/cert.pem \
  -key path/to/cert.key \
  vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233 \
  temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo
```

#### Temporal SDKs

```go
c, err := client.Dial(client.Options{
	HostPort:  "vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233",
	Namespace: "namespace-name.accId",
	ConnectionOptions: client.ConnectionOptions{
		TLS: &tls.Config{
			Certificates: []tls.Certificate{cert},
			ServerName:   "my-namespace.my-account.tmprl.cloud",
		},
	},
})
```

```java
WorkflowServiceStubs service =
    WorkflowServiceStubs.newServiceStubs(
        WorkflowServiceStubsOptions.newBuilder()
            .setSslContext(sslContext)
            .setTarget("vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233")
            .setChannelInitializer(
                c -> c.overrideAuthority("my-namespace.my-account.tmprl.cloud"))
            .build());
```

```ts
const connection = await NativeConnection.connect({
  address: "vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233",
  tls: {
    serverNameOverride: "my-namespace.my-account.tmprl.cloud",
    //serverRootCACertificate, // See docs for other TLS options
    clientCertPair: {
      crt: fs.readFileSync(clientCertPath),
      key: fs.readFileSync(clientKeyPath),
    },
  },
});
```

```python
client = await Client.connect(
    "vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233",
    tls=TLSConfig(
        client_cert=bytes(crt, "utf-8"),
        client_private_key=bytes(key, "utf-8"),
        domain="my-namespace.my-account.tmprl.cloud",
    ),
)
```

```dotnet
// Create client
var client = await TemporalClient.ConnectAsync(
    new(ctx.ParseResult.GetValueForOption(targetHostOption)!)
    {
        Namespace = ctx.ParseResult.GetValueForOption(namespaceOption)!,
        // Set TLS options with client certs. Note, more options could
        // be added here for server CA (i.e. "ServerRootCACert") or SNI
        // override (i.e. "Domain") for self-hosted environments with
        // self-signed certificates.
        Tls = new()
        {
            ClientCert =
                await File.ReadAllBytesAsync(ctx.ParseResult.GetValueForOption(clientCertOption)!.FullName),
            ClientPrivateKey =
                await File.ReadAllBytesAsync(ctx.ParseResult.GetValueForOption(clientKeyOption)!.FullName),
            Domain = "my-namespace.my-account.tmprl.cloud",
        },
    });

// dotnet run --target-host "vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com:7233"
```

To check whether your client has network connectivity to the private endpoint in question, run:

```bash
nc -zv vpce-0123456789abcdef-abc.us-east-1.vpce.amazonaws.com 7233
```

## Control plane connectivity

Using the Temporal Cloud [web UI](/web-ui), [Terraform provider](/cloud/terraform-provider), [`tcld` CLI](/cloud/tcld), or [Cloud Ops APIs](/ops) requires network access to the Temporal Cloud control plane.

### Control plane hostnames

Different hostnames are used for different parts of the service.
- `saas-api.tmprl.cloud` (required for Terraform, tcld, and Cloud Ops APIs) - `web.onboarding.tmprl.cloud` (required for Web UI) - `web.saas-api.tmprl.cloud` (required for Web UI) ### AWS PrivateLink connectivity to Temporal Cloud control plane Temporal Cloud supports [AWS PrivateLink](https://aws.amazon.com/privatelink/) connections to the control plane, which allows access from applications running in VPCs that cannot egress to the public internet. Temporal Cloud does **not** support restricting an account so that private connectivity is the sole connectivity method to the control plane; the control plane is always accessible via public internet. Control plane access is always securely authenticated via [API keys](/cloud/api-keys#overview) or JWT tokens, regardless of how you choose to connect. To set up a PrivateLink connection to the Temporal Cloud control plane, follow [these instructions](/cloud/connectivity/aws-connectivity), but use the control plane endpoint information below: | Hostname | Region | Control Plane PrivateLink Service Name | | ---------------------- | ----------- | --------------------------------------------------------- | | `saas-api.tmprl.cloud` | `us-west-2` | `com.amazonaws.vpce.us-west-2.vpce-svc-0c57a5930b6f6be0e` | The control plane PrivateLink endpoint includes a [private DNS name](https://docs.aws.amazon.com/vpc/latest/privatelink/manage-dns-names.html), which lets your clients use the PrivateLink connection without having to set up private DNS or having to override client configuration. To use the DNS name, make sure your VPC has the `Enable DNS hostnames` and `Enable DNS support` options enabled. If you cannot use the DNS name, you can also manually [set up private DNS](/cloud/connectivity/aws-connectivity#configuring-private-dns-for-aws-privatelink) or [override the server and TLS settings on your clients](/cloud/connectivity#update-dns-or-clients-to-use-private-connectivity). :::caution Finish client setup After creating a private connection, you must use the provided DNS name, set up private DNS, or update the configuration of all clients you want to use the private connection. Without this step, your clients may connect to the control plane over the internet if they were previously using public connectivity, or they will not be able to connect at all. ::: The control plane is also accessible via the PrivateLink endpoint in AWS us-west-2 that can be used for namespace traffic, but we strongly recommend using the control-plane specific endpoint for control plane traffic. --- ## Temporal Cloud IP Addresses The specific IP addresses for Temporal Cloud resources are subject to change at any time. Temporal Cloud resources may use any IPs within the IP ranges published by the relevant cloud provider. If you need to limit outbound access from your client network, we recommend using [AWS PrivateLink or GCP Private Services Connect](/cloud/connectivity#private-network-connectivity-for-namespaces) instead of IP allowlisting. :::warning Do not allowlist specific IP addresses **Temporal Cloud IPs are not static and may change without notice.** Do not allowlist specific IP addresses you see Temporal Cloud services using at a point in time, as this **will cause an outage** when those IPs change. Your clients will not be able to connect to Temporal Cloud. 
If you have to allowlist IP ranges, you must allowlist the entire cloud provider IP range: - [AWS IP address ranges](https://ip-ranges.amazonaws.com/ip-ranges.json) - [GCP IP address ranges](https://www.gstatic.com/ipranges/cloud.json) ::: --- ## Workflow History Export Workflow History Export allows users to export closed Workflow Histories from Temporal Cloud to cloud object storage (AWS S3 or GCP GCS), enabling: - Compliance and audit trails of complete Workflow History data in [proto format](https://github.com/temporalio/api/blob/master/temporal/api/export/v1/message.proto) - Analytics on Workflow History when ingested to the data platform of your choice Workflow History Export in Temporal Cloud provides similar functionality as [Archival](/self-hosted-guide/archival) in a Self-Hosted Temporal Server. Archival is not supported in Temporal Cloud. Exports run hourly, beginning 10 minutes after the hour. Allow up to 24 hours for a closed Workflow to appear in the exported file. Delivery is guaranteed at least once. ## Prerequisites {#prerequisites} To use Workflow History Export, you must have: 1. A cloud account in the cloud provider where your Namespace is hosted. 2. An object storage bucket available to receive the exported History. ## Configure Workflow History Export {#configure} ### AWS [AWS S3 Export Configuration](/cloud/export/aws-export-s3) ### GCP [GCP GCS Export Configuration](/cloud/export/gcp-export-gcs) ## Verify export setup {#verify} From the Export configuration page, select **Verify**. This validates that Temporal can successfully write a test file to your object storage. If everything is configured correctly, you will see a `Success` status indicating Temporal has written to the object store. ## Monitor export progress {#monitor} After Export has been configured, you can check that it's still working in several ways: 1. **Object Storage**: - File Delivery: After the initial hour of setting up, inspect your object storage. You should see the exported Workflow History files. - Directory Structure: Your exported files will adhere to the following naming convention and path: ```bash //[bucket-name]/temporal-workflow-history/export/[Namespace]/[Year]/[Month]/[Day]/[Hour]/[Minute]/ ``` The exported file name will include a randomly generated ID. The time recorded in the directory structure is the time the export uploads to object storage, not the Workflow completion time. 2. **Temporal Cloud Web UI**: - Export UI: - Last Successful Export: This displays the timestamp of the most recent successful export. - Last Status Check: This reflects the timestamp of the latest internal Workflow healthcheck. - Usage Dashboard: - Actions from the Export Job are included in the [Usage Dashboard](/cloud/billing-and-cost). 3. **Metrics**: - Export-related metrics are available from the [Cloud metrics endpoint](/cloud/metrics/), specifically the metric `temporal_cloud_v1_total_action_count` with the label `is_background="true"`. 4. **Email**: - Emails are sent to `Namespace Administrator`, `Account Owner`, and `Global Administrator` roles when a Workflow History Export job fails due to a user related error (such as Object Store permissions issue). ## Working with exported files Use the proto schema defined [here](https://github.com/temporalio/api/blob/master/temporal/api/export/v1/message.proto) to deserialize exported files. ### Using exported files in analytics It can be useful to convert protos to another format to perform analytics on the data. 
To convert protos to parquet, follow [the example Python Workflow](https://github.com/temporalio/samples-python/tree/main/cloud_export_to_parquet). Note that this example Workflow:

* Transforms the nested proto structure into a flat, tabular format, where each row in the table represents a single history event from a Workflow. To preserve their relationship post-conversion, the `workflowID` and `runID` are included in every row.
* Excludes the payload field. If you have enabled the codec server, this field is encrypted and may contain characters that are not recognized when loaded into a database.

## Export and High Availability Namespaces {#export-ha}

### Export Region Persistence

When Export is configured for a [High Availability](/cloud/high-availability) Namespace, the export is tied to the specific region where it was initially set up. The export configuration does not automatically fail over with the Namespace.

- If Export is configured in Region A, it will continue to export from Region A's storage even after a Namespace failover to Region B
- Exports always read from and write to the same region where they were originally configured
- The export process is independent of Namespace failover events
- Export does not fail over automatically because we prioritize data completeness and consistency over real-time availability for exports. HA data replication has inherent latency, which could result in incomplete or inconsistent exports during a failover.

### Failover Scenarios

**Namespace Failover with Healthy Primary Region**: When a Namespace fails over to a secondary region but the primary region remains healthy (including its blob storage), the export job continues to operate from the primary region. It does not automatically switch to export data from the secondary region.

**Primary Region Outage**: If the primary region (where Export was configured) experiences a complete outage, including S3/GCS storage, exports will be unavailable until the primary region recovers. Once the primary region recovers, export will resume and include any Workflow histories that occurred during the outage. There may be delays in export processing, but the complete dataset will eventually be available. Even in this scenario, Export does not automatically switch to export data from the secondary region.

---

## Exporting Workflow Event History to GCS

## Prerequisites {#prerequisites}

Before configuring the Export sink, complete the following steps in Google Cloud.

1. Create a GCS bucket and take note of its bucket name, for example, "test-export".
   - Enable customer-managed encryption keys (CMEK) if you need additional security for your GCS bucket.
   - Currently, only single-region buckets are supported (choose the "Region" option when creating the bucket in GCS, not "Multi-region" or "Dual-region").
   - The region of the bucket must be the same as the region of your Temporal Cloud Namespace.
2. Record the GCP Project ID that owns the bucket.
3. Create a service account in the same project and grant it permission to write to your GCS bucket (a scripted sketch follows this list).
4. Follow the instructions in the Temporal Cloud UI. There are two ways to set up this service account:
   - Manual Setup:
     - Input the service account ID, GCP project ID, and GCS bucket name.
     - Follow the instructions to manually set up a new service account.
   - Automated Setup:
     - Use the [Terraform template](https://github.com/temporalio/terraform-modules/tree/main/modules/export-sa) to create the service account.
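If you script the manual setup, a minimal sketch with the `gcloud` CLI might look like the following. The service-account name, project ID, and bucket are example values, and `roles/storage.objectAdmin` is an assumption; grant exactly the role(s) and any additional bindings that the Cloud UI instructions or the Terraform template specify.

```shell
# Create the service account that Temporal Cloud will write through
# (names and project are examples; follow the Cloud UI instructions exactly)
gcloud iam service-accounts create temporal-export-sink \
  --project="test-export-sink" \
  --display-name="Temporal Cloud export sink"

# Grant it write access to the export bucket
# (roles/storage.objectAdmin is an assumed role; use the role the UI specifies)
gcloud storage buckets add-iam-policy-binding gs://test-export \
  --member="serviceAccount:temporal-export-sink@test-export-sink.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```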
## Configure Workflow History Export

There are multiple ways to configure export: through the [Temporal Cloud UI](#using-temporal-cloud-ui), [`tcld`](#using-tcld), or [`terraform`](#using-terraform).

:::note Why does Temporal Cloud provision multiple service accounts for Export?

Temporal Cloud creates multiple intermediary service accounts for export operations primarily for security purposes. The system randomly selects from these accounts when writing to your storage sink, which provides several benefits:

- **Security isolation**: If one service account is compromised or needs to be decommissioned, other accounts remain available
- **Load distribution**: Prevents exclusive reliance on a single account, reducing security risk
- **Warm standby**: Keeps multiple accounts active to avoid potential throttling when switching between accounts
- **Reliability**: Provides resilience against cloud provider account-level issues that could affect a single service account

This approach prioritizes security and availability, ensuring robust export operations even if individual service accounts encounter issues.

:::

### Using Temporal Cloud UI

The following steps guide you through setting up Workflow History Export using the Temporal Cloud UI.

![](/img/cloud/gcp/export-sink-ui-gcp.png)

1. In the Cloud UI, navigate to the Namespaces section. Confirm that the Export feature is visible and properly displayed.
2. Configure the Export sink for a Namespace:
   1. Choose GCS as the sink type.
   2. Provide the following information:
      1. Name
      2. Service account ID
      3. GCP Project ID
      4. GCS bucket name
3. After inputting the necessary values, click **Verify**. You should be able to write to the sink successfully. If not, fix any errors or reach out to support for help.
   - If you just created the GCS bucket and granted permission for your service account, it may take some time for the permission to propagate. You may need to wait up to 5 minutes before clicking the **Verify** button to verify the connection.
4. Click **Create** to complete the Export sink setup.
5. The page will auto-refresh and you should see the status "Enabled" on the Export screen. You are now ready to export Workflow histories.
6. You can toggle the enable button if you want to stop export and resume in the future. **Note**: when you re-enable the feature, it will start from the current point in time, not from the time when you disabled export.
7. You can also delete the export sink by clicking **Delete**.

:::tip

Don't forget to click Create at the end of your setup to confirm your export.

:::

### Using tcld

To access export-related commands in tcld, follow these steps:

1. Download the latest version of tcld by following the instructions [here](https://docs.temporal.io/cloud/tcld/#install-tcld).
2. Make sure your tcld version is v0.35.0 or above.
3. Run the command `tcld n export gcs`:

   ```bash
   NAME:
      tcld namespace export gcs - Manage GCS export sink

   USAGE:
      tcld namespace export gcs command [command options] [arguments...]

   COMMANDS:
      create, c    Create export sink
      update, u    Update export sink
      validate, v  Validate export sink
      get, g       Get export sink
      delete, d    Delete export sink
      list, l      List export sinks
      help, h      Shows a list of commands or help for one command

   OPTIONS:
      --help, -h  show help
   ```

4. Run the `tcld n export gcs create` command and provide the following information:
   - `--namespace`: The Namespace to configure export for.
   - `--sink-name`: The name of the export sink.
   - `--service-account-email`: The service account that has access to the sink.
   - `--gcs-bucket`: The name of the GCP GCS bucket.

   For example:

   ```bash
   tcld n export gcs create -n test.ns --sink-name test-sink --service-account-email test-sink@test-export-sink.iam.gserviceaccount.com --gcs-bucket test-export-validation
   ```

5. Check the status of this command by either viewing the Namespace Export status in the Temporal Cloud UI or using the following command and looking for the state "Active":

   ```bash
   tcld n export gcs g -n test.ns --sink-name test-sink
   {
     "name": "test.ns",
     "resourceVersion": "b954de0c-c6ae-4dcc-90bd-3918b52c3f28",
     "state": "Active",
     "spec": {
       "name": "test-sink",
       "enabled": true,
       "destinationType": "Gcs",
       "s3Sink": null,
       "gcsSink": {
         "saId": "test-sink",
         "bucketName": "test-export-validation",
         "gcpProjectId": "test-export-sink"
       }
     },
     "health": "Ok",
     "errorMessage": "",
     "latestDataExportTime": "0001-01-01T00:00:00Z",
     "lastHealthCheckTime": "2024-01-23T06:40:02Z"
   }
   ```

### Using `terraform`

See the [Terraform export support](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/namespace_export_sink) for setup instructions.

### Next Steps

- [Verify export setup](/cloud/export#verify)
- [Monitor export progress](/cloud/export#monitor)
- [Work with exported files](/cloud/export#working-with-exported-files)

---

## Manage API keys

Temporal Cloud API keys offer industry-standard identity-based authentication for Temporal users and [Service Accounts](/cloud/service-accounts).

This document introduces Temporal Cloud's API key features:

- [API key overview](#overview)
- [API key best practices](#best-practices)
- [Global Administrator and Account Owner API key management](#manage-api-keys)
- [User API key management](#user-api-keys)
- [Manage API keys for Service Accounts](#serviceaccount-api-keys)
- [API keys for Namespace authentication](#namespace-authentication)
- [Use API keys to authenticate](#using-apikeys)
- [Troubleshoot your API key use](#troubleshooting)
- [API keys: Frequently Asked Questions](#faqs)

## API key overview {#overview}

Each Temporal Cloud API key is a unique identity linked to role-based access control (RBAC) settings to ensure secure and appropriate access.

## API key best practices {#best-practices}

- **Keep it secret; keep it safe**: Treat your API key like a password. Do not expose it in client-side code, public repositories, or other easily accessible locations.
- **Rotate keys regularly**: Change your API keys periodically to reduce risks from potential leaks.
- **Design your code for key updates**: Use key management practices that retrieve your API keys without hard-coding them into your apps. This lets you restart your Workers to refresh your rotated keys without recompiling your code.
- **Monitor API key usage**: Check usage metrics and logs regularly. Revoke the key immediately if you detect any unexpected or unauthorized activity.
- **Use a Key Management System (KMS)**: Employ a Key Management System to minimize the risk of key leaks.

### API key use cases

API keys are used for the following scenarios:

- _**Cloud operations automation**_: API keys work with Temporal Cloud operational tools, including [`tcld`](/cloud/tcld), [Cloud Ops APIs](/ops), and [the Terraform provider](/cloud/terraform-provider). Use them to manage your Temporal Cloud account, Namespaces, certificates, and user identities.
- _**Namespace authentication**_: API keys serve as an authentication mechanism for executing and managing Workflows via the SDK and Temporal CLI, offering an alternative to mTLS-based authentication. ### API key supported tooling Use API keys to authenticate with: - [The Temporal CLI](/cli) - [Temporal SDKs](/develop) - [`tcld`](/cloud/tcld/index.mdx) - [The Cloud Operations API](/cloud/operation-api.mdx) - [Temporalʼs Terraform provider](/cloud/terraform-provider) ### API key permissions API keys support both users and Service Accounts. Here are the differences in their permissions: - Any user can create, delete, and update their _own_ API key using the Cloud UI or `tcld`. - Only Global Administrators and Account Owners can create, delete, and update access to API keys for all types of Service Accounts. - Namespace Admins can create, delete, and update access to API keys for the Namespace-scoped Service Accounts they administer. ### API key prerequisites Check these setup details before using API keys: - The Global Administrator or Account Owner may need to [enable API keys access](#manage-api-keys) for your Temporal Account. - Have access to the [Temporal Cloud UI](https://cloud.temporal.io/) or Temporal Cloud CLI ([tcld](https://docs.temporal.io/cloud/tcld/)) to create an API key. ## Global Administrator and Account Owner API key management {#manage-api-keys} Global Administrators and Account Owners can monitor, manage, disable, and delete API keys for any user or Service Account within their account. To manage your account’s API keys: 1. [Log in](https://cloud.temporal.io/) to the Temporal Cloud UI. 1. Go to [Settings → API Keys](https://cloud.temporal.io/settings/api-keys) Administrators can disable the creation of new API keys using the **Disable Create API Keys** button on the **API Keys** Settings page. Existing API keys can still be used to authenticate into Temporal Cloud normally until they are either disabled, deleted, or expired. To disable or delete an individual API key use the vertical ellipsis menu in the API key table row. To find an API key, you can filter by API key state and identity type (Global Administrators and Account Owners only). :::caution DISABLED API KEYS Deleting or disabling a key removes its ability to authenticate into Temporal Cloud. If you delete or disable an API key being used by Workers to run a Workflow, those Workers will be unable to connect to Temporal until a new API key secret is created and configured. ::: ## User API key management {#user-api-keys} Manage your personal API keys with the Temporal Cloud UI or `tcld`. These sections show you how to generate, manage, and remove API keys for a user. ### Generate an API key Create API keys using one of the following methods: :::caution - Once generated, copy and securely save the API key. It will be displayed only once for security purposes. ::: #### Generate API keys with the Temporal Cloud UI [Log in](https://cloud.temporal.io/) to the Temporal Cloud UI and navigate to your [Profile Page → API Keys](https://cloud.temporal.io/profile/api-keys). Then select **Create API key** and provide the following information: - **API key name**: A short, identifiable name for the key - **API key description**: A longer description of the key's use - **Expiration date**: The end date for the API key Finish by selecting **Generate API Key**. 
#### Generate API keys with tcld To generate an API key, log into your account and issue the following command: ```command tcld login tcld apikey create \ --name \ --description "" \ --duration ``` Duration specifies the time until the API key expires, for example: "30d", "4d12h", etc. ### Enable or Disable an API key You can enable or disable API keys. When disabled, an API key cannot authenticate with Temporal Cloud. #### Manage API key state with the Temporal Cloud UI Follow these steps: 1. [Log in](https://cloud.temporal.io/) to the Temporal Cloud UI. 1. Go to your [Profile Page → API Keys](https://cloud.temporal.io/profile/api-keys). 1. Select the vertical ellipsis menu in the API key table row. 1. Choose **Enable** or **Disable**. #### Manage API Key State with tcld To manage an API key, log into your account and use one of the following commands to enable or disable it: ```command tcld login tcld apikey disable --id tcld apikey enable --id ``` ### Delete an API key Deleting an API key stops it from authenticating with Temporal Cloud. :::caution Deleting an API key used by Workers to run a Workflow will cause it to fail unless you rotate the key with a new one. This can affect long-running Workflows that outlast the API key's lifetime. ::: #### Delete API keys with the Temporal Cloud UI Follow these steps to remove API keys: 1. [Log in](https://cloud.temporal.io/) to the Temporal Cloud UI. 1. Navigate to your [Profile Page → API Keys](https://cloud.temporal.io/profile/api-keys). 1. Select the vertical ellipsis menu in the API key table row. 1. Choose **Delete**. #### Delete API keys with tcld To delete an API key, log into your account and issue the following: ```command tcld login tcld apikey delete --id ``` ### Rotate an API key Temporal API keys automatically expire based on the specified expiration time. Follow these steps to rotate API keys: 1. Create a new key. You may reuse key names if that helps. 1. Ensure that both the original key and new key function properly before moving to the next step. 1. Switch clients to load the new key and start using it. 1. Delete the old key after it is no longer in use. ## Manage API keys for Service Accounts {#serviceaccount-api-keys} Global Administrators and Account Owners can manage and generate API keys for _all_ Service Accounts in their account. Namespace Admins can manage and generate API keys for the Namespace-scoped Service Accounts they administer. This is different for non-admin users, who manage and generate their own API keys. ### Generate an API Key for a Service Account Create API keys for Service Accounts using one of the following methods: :::caution - Once generated, copy and securely save the API key. It will be displayed only once for security purposes. ::: #### Generate API Keys with the Temporal Cloud UI [Log in](https://cloud.temporal.io/) to the Temporal Cloud UI. Global Administrators or Account Owners can go to [Settings → API Keys](https://cloud.temporal.io/settings/api-keys). Namespace Admins can go to [Profile Page → API Keys](https://cloud.temporal.io/profile/api-keys). Select **Create API Key**, then choose **Service Account** from the "Create an API key for" dropdown. In the "Mapped to identity" input box, select a Service Account and provide the following information: - **API key name**: A short, identifiable name for the key - **API key description**: A longer description of the key's use - **Expiration date**: The end date for the API key Finish by selecting **Generate API Key**. 
#### Generate API keys with tcld To create an API key for a Service Account, use `tcld apikey create` with the `--service-account-id` flag: ``` tcld apikey create \ --name \ --description "" \ --duration \ --service-account-id ``` ### Enable or Disable an API key Global Administrators and Account Owners can manage API key access for any user in their account using the Temporal Cloud UI or `tcld`. #### Manage keys with Temporal Cloud UI Follow these steps: 1. [Log in](https://cloud.temporal.io/) to the Temporal Cloud UI. 1. Global Administrators or Account Owners can go to [Settings → API Keys](https://cloud.temporal.io/settings/api-keys). Namespace Admins can go to [Profile Page → API Keys](https://cloud.temporal.io/profile/api-keys). 1. Find the API key. Use the vertical ellipsis menu in the table row and select the Disable/Enable option to perform the action. There may be a delay after changing the status. Once successful, the updated API key status will be shown in the row. #### Manage keys with tcld Use the `tcld apikey disable` or `tcld apikey enable` command to disable or enable an API key: ``` tcld login tcld apikey disable --id tcld apikey enable --id ``` This command is the same for users and Service Accounts. ### Delete an API key for a Service Account Global Administrators and Account Owners can delete API keys for any user or Service Account in their account using the Temporal Cloud UI or `tcld`. Deleting a key removes its ability to authenticate with Temporal Cloud. If you delete an API key used by a Worker to run a Workflow, that Worker will fail to connect to Temporal server unless you rotate the API key with a new one. #### Delete a Service Account API key with Temporal Cloud UI Follow these steps: 1. Go to [Settings → API Keys](https://cloud.temporal.io/settings/api-keys). 1. Locate the API key. Use the vertical ellipsis menu in the table row and select the Delete option. There may be a delay after deleting the API key. 1. Once successful, the updated API key status will be reflected in the row. #### Delete a Service Account API key with tcld Use the `tcld apikey delete` command to delete an API key. The process for deleting an API key is the same for a user or Service Account. ``` tcld login tcld apikey delete --id ``` ### Rotate a Service Account API key Temporal API keys automatically expire based on the specified expiration time. Follow these steps to rotate API keys: 1. Create a new key. You may reuse key names if that helps. 1. Ensure that both the original key and new key function properly before moving to the next step. 1. Switch clients to load the new key and start using it. 1. Delete the old key after it is no longer in use. :::tip Service Accounts can rotate their own API keys irrespective of their configured permissions. To use this feature, have your Service Account create a new API key using the [Cloud Ops APIs](/ops) or [`tcld`](/cloud/tcld) before the current one expires. Service Accounts cannot delete their own API keys without the requisite permissions, which helps keep Workflow access secure. ::: ## API keys for Namespace authentication {#namespace-authentication} Create a Namespace with API key authentication as an alternative to mTLS-based authentication by selecting "Allow API key authentication" during setup. The gRPC endpoint format for the Namespace depends on the authentication method and whether or not High Availability features are enabled. 
See the following documentation for [accessing Namespaces](/cloud/namespaces#access-namespaces) for more information. :::info When switching on or off High Availability features for a Namespace, you may need to update the gRPC endpoint used by your Workers and Clients, because the Namespace endpoint changes based on whether High Availability features are enabled. See [Disable High Availability](/cloud/high-availability/enable#disable) for more information. ::: ### Without High Availability features Use the gRPC regional endpoint `..api.temporal.io:7233`. ### With High Availability features Use the gRPC Namespace endpoint: `..tmprl.cloud:7233`. This allows Workers and Clients to always connect to the active region of the Namespace. In a failover event, Temporal Cloud changes the Namespace's active region and points this Namespace endpoint to the new active region. ## Use API keys to authenticate {#using-apikeys} Authenticate with Temporal Cloud using API keys with the following clients: - [Temporal CLI](/cli) - [SDKs](/develop) - [Temporal Cloud CLI `tcld`](/cloud/tcld/index.mdx) - [The Cloud Operations API](/cloud/operation-api.mdx) - [Temporal’s Terraform Provider](/cloud/terraform-provider) ### Temporal CLI To use your API key with the Temporal CLI, either pass it with the `--api-key` flag or set an environment variable in your shell (recommended). The CLI automatically picks up the `TEMPORAL_API_KEY` environment variable from your shell. In addition to the API key, the following client options are required: - `--address`: Provide the Namespace's gRPC endpoint from the Namespace UI's gRPC endpoint box. - For API key connections, use the format `..api.temporal.io:7233`. - You can set the address using an environment variable. - `--namespace`: Provide the `namespace.accountId` from the top of the Namespace page in the UI. - Use the format `.`. - This can be set using an environment variable. For example, to connect to Temporal Cloud from the CLI using an environment variable for the API key: ```bash export TEMPORAL_API_KEY= temporal workflow list \ --address \ --namespace . ``` :::tip ENVIRONMENT VARIABLES Do not confuse environment variables, set with your shell, with temporal env options. ::: ### SDKs To use your API key with a Temporal SDK, see the instructions in each SDK section. [How to connect to Temporal Cloud using an API Key with the Go SDK](/develop/go/temporal-client#connect-to-temporal-cloud) [How to connect to Temporal Cloud using an API Key with the Java SDK](/develop/java/temporal-client#connect-to-temporal-cloud) [How to connect to Temporal Cloud using an API Key with the Python SDK](/develop/python/temporal-client#connect-to-temporal-cloud) [How to connect to Temporal Cloud using an API Key with the TypeScript SDK](/develop/typescript/temporal-client#connect-to-temporal-cloud) [How to connect to Temporal Cloud using an API Key with the .NET SDK](/develop/dotnet/temporal-client#connect-to-temporal-cloud) ### tcld To use an API key with `tcld`, choose one of these methods: - Use the `--api-key` flag. - Set the `TEMPORAL_API_KEY` environment variable in your shell. :::tip ENVIRONMENT VARIABLES Do not confuse environment variables, set with your shell, with temporal env options. ::: ### Cloud Ops API To use an API key with the [Cloud Ops API](/ops), securely pass the API key in your API client. For a complete example, see [Cloud Samples in Go](https://github.com/temporalio/cloud-samples-go/blob/1dd4254b6ed1937e361005c0144410e72b8a5542/client/api/apikey.go). 
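For a quick smoke test of Cloud Ops API access without writing Go, you can call it with `grpcurl`. This is a hedged sketch: the service and method names are taken from the Cloud Ops API protos, the key is passed as a bearer token, and it assumes the endpoint supports gRPC server reflection (otherwise supply the protos with `-proto`).

```bash
export TEMPORAL_API_KEY=your-api-key   # placeholder

grpcurl \
  -H "Authorization: Bearer $TEMPORAL_API_KEY" \
  saas-api.tmprl.cloud:443 \
  temporal.api.cloud.cloudservice.v1.CloudService/GetNamespaces
```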
### Terraform Provider

To use an API key with the [Temporal Terraform Provider](/cloud/terraform-provider), pass the API key as a provider argument.

## Troubleshoot your API key use {#troubleshooting}

**Invalid API key errors**: Check that you copied the key correctly and that it hasn't been revoked or expired.

## API keys: Frequently Asked Questions {#faqs}

**Q: Can I issue and use multiple API keys for the same account?**

A: Yes, you can generate multiple API keys for different services or team members.

**Q: How many API keys can be issued at once?**

A: Up to 10 non-expired keys per user and 20 non-expired keys per Service Account.

**Q: Do API keys expire?**

A: Yes, API keys expire based on the specified expiration date. Temporal recommends rotating API keys periodically.

**Q: What's the maximum allowed expiration for an API key?**

A: The maximum expiration time for an API key is 2 years.

**Q: What happens if I misplace or lose my API bearer token/secret key?**

A: The full key is displayed only once upon creation for security reasons. If you lose it, generate a new one.

**Q: What is the `Generate API Key` button on the Namespace page?**

A: The `Generate API Key` button on a Namespace page generates an API key with `Admin` permissions for the given Namespace and the maximum expiration time, which is 2 years. For additional details, refer to [Namespace-scoped Service Accounts](/cloud/service-accounts#scoped).

---

## Usage and Billing Management

Temporal strives to provide full transparency over billing and costs. Account Owners and Finance Admins can view their [detailed billing information](https://cloud.temporal.io/billing) at any time. Use this information to assess your spending patterns, inspect your credit ledger, check your invoice histories, update payment details, and manage your current plan as needed. You can see namespace-level cost estimates on the [usage dashboard](https://cloud.temporal.io/usage).

For more information on current Temporal Cloud pricing for Actions, storage, and services/support, please visit our [Pricing](/cloud/pricing) page.

The [billing](https://cloud.temporal.io/billing) page includes the following information. If you're not on a standard plan, your billing page may show a subset of this list:

- [Current balance](#current-balance): Your balance to date for this billing cycle
- [Recent bill](#recent-bill): The amount of your most recent bill
- [Invoice history](#invoice): Access to all past invoices
- [Credit ledger](#credit-table): A record of all credit-related transactions, including details on credit grants, purchases, usage, and remaining credits, if applicable
- [Plan](#plans): Your current plan, consumption pricing, and entitlements, with the ability to manage upgrades and downgrades, payment method, and account deletion

The [Usage](https://cloud.temporal.io/usage) page shows the cost breakdown by Namespace. If your organization separates projects by Namespace -- for architectural reasons, for development/production differentiation, for different products, etc. -- you can view individual costs for each Namespace.

## Current balance {#current-balance}

Your current balance card shows the balance for your current billing cycle and the date it was last updated. This balance adjusts with use and appears on the first line of your Invoices table.

:::note Billing Cycles

Billing cycles normally begin on the first of the month (UTC). The minimum plan fee for your first month is prorated based on your sign-up date.
:::

## Recent bill {#recent-bill}

The "Recent Bill" card displays the previous bill amount.

![Recent bill card showing a balance of $0.00](/img/cloud/billing/billing-card.png)

- If you pay your invoices through Stripe, you'll see a **Pay Now** button. It takes you to the Stripe portal to complete your payment.
- If your account is set up for auto-payment, you don't need to pay bills manually. However, you can choose to make manual payments whenever you wish.

## Invoices {#invoice}

To review your invoices, follow these steps:

1. Click **Billing** on your left-side vertical navigation.
2. Under the **Invoices** section, select and download the invoice(s) you want to review.

The Invoices table shows the following information:

- Date (UTC): The date range covered by the invoice
- Type: The type of invoice, such as credit purchase or cloud usage
- Status: The current status of the invoice, such as paid or pending
- Credit Granted: The total credits added to your account
- Credit Purchase Amount: The amount paid for purchasing credits
- Credit Usage: The credits used during the billing cycle
- Subtotal: The total amount of the invoice before any adjustments
- Balance Due: The amount to pay after applying credits

![Billing page showing Invoices tab](/img/cloud/billing/billing-invoices.png)

You can download invoices from before the current calendar month by clicking the download icon next to the date.

:::note Current Month Invoice

During the current billing period, your invoice will not be finalized and the download option will not be available.

:::

## Credits {#credit-table}

The following information appears under the credits table:

- Effective At (UTC): The date when the credit grant became effective
- Type: Indicates whether the transaction was a deduction, expiry, or grant
- Amount: The credit amount that was granted, deducted, or expired
- Credits Remaining: The remaining credit available in the account

![Billing page showing Credits tab](/img/cloud/billing/billing-credits.png)

## Cost by Namespace {#cost-by-namespace}

Account Owners and Finance Admins can access a cost column on the Usage page. This allows you to monitor your cost on a per-Namespace basis. If your organization separates work by Namespace—for development, production, or different products—you can view costs for each.

![Billing page showing Usage](/img/cloud/billing/billing-usage.png)

:::note Cost Breakdown Limitations

Namespace cost details are not available for "last 90 days" or "last 120 days". Cost breakdowns distribute the total usage cost to namespaces proportionally based on their metered usage. The proration reflects your effective price, factoring in included Actions/Storage and tiered pricing rates in your Temporal plan.

:::

## Plans {#plans}

Account Owners and Finance Admins can access their Temporal Plan information on the plans page. Customers on a standard agreement can:

- View current plan information, pricing details, and entitlements
- View other available plans, pricing details, and entitlements
- View Pay-as-You-Go pricing rates applicable to their plan
- Upgrade and downgrade between plans available on a standard agreement

![Billing page showing Plans tab](/img/cloud/billing/billing-plans.png)

Requests to upgrade your plan are processed immediately and you will be billed on a pro-rated basis for that billing period. Your monthly entitlements will reflect the full volume of included Actions and Storage of the upgrade plan for that billing month.
After an upgrade, a downgrade cannot be processed until the following billing period. Requests to downgrade will be processed immediately. Billing and entitlements will be backdated to the beginning of the billing period. ## Account Cancellation The way you created your Temporal account determines how you can cancel your subscription and remove the account. - **For accounts managed by our sales team**. Please submit a support ticket so we can help you. - **For accounts created through our self-signup portal**. Account owners can delete their accounts on the Temporal Cloud Billing page, under the **Plan** tab. If you're no longer using Temporal Cloud, use the Delete Account button to begin the process. - Permanently deleted accounts will immediately cease billing and be scheduled for full deletion within 72 hours. - Account Data and Active Storage will be permanently deleted. Retained Storage will be deleted in accordance with its configured retention period. ![Billing page showing the Plan tab. The contents on the tab include "Manage Payment Method" and "Delete Account" buttons. The "Delete Account" button is placed below text asking "No longer using Temporal Cloud?"](/img/cloud/billing/billing-cancel.png) --- ## Authenticate with mTLS certificates [Temporal Cloud](https://temporal.io/cloud) supports both mTLS and [API key](/cloud/api-keys) authentication for namespace access. When using mTLS authentication, each [Worker Process](/workers#worker-process) uses a CA certificate and private key to connect to Temporal Cloud. Temporal Cloud does not require an exchange of secrets; only the certificates produced by private keys are used for verification. :::caution Don't let your certificates expire An expired root CA certificate invalidates all downstream certificates. An expired end-entity certificate prevents a [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) from connecting to a Namespace or starting a Workflow Execution. If the client is on a Worker, any current Workflow Executions that are processed by that Worker either run indefinitely without making progress until the Worker resumes or fail because of timeouts. Temporal Cloud sends [courtesy emails](/cloud/notifications#admin-notifications) prior to certificate expiry. To update certificates, see [How to add, update, and remove certificates in a Temporal Cloud Namespace](#manage-certificates). ::: All certificates used by Temporal Cloud must meet the following requirements. ## Requirements for CA certificates in Temporal Cloud {#certificate-requirements} Certificates provided to Temporal for your [Namespaces](/namespaces) _must_ meet the following requirements. ### CA certificates A CA certificate is a type of X.509v3 certificate used for secure communication and authentication. In Temporal Cloud, CA certificates are required for configuring mTLS. CA certificates _must_ meet the following criteria: - The certificates must be X.509v3. - Each certificate in the bundle must be either a root certificate or issued by another certificate in the bundle. - Each certificate in the bundle must include `CA: true`. - A certificate cannot be a well-known CA (such as DigiCert or Let's Encrypt) _unless_ the user also specifies certificate filters. - The signing algorithm must be either RSA or ECDSA and must include SHA-256 or stronger message authentication. SHA-1 and MD5 cannot be used. - The certificates cannot be generated with a passphrase. :::info A certificate bundle can contain up to 16 CA certificates. 
A certificate bundle can have a maximum payload size of 32 KB before base64 encoding.

:::

### End-entity certificates

An end-entity certificate is a type of X.509v3 certificate used by clients to authenticate themselves. Temporal Cloud lets you limit access to specific end-entity certificates by using [certificate filters](#manage-certificate-filters).

An end-entity (leaf) certificate _must_ meet the following criteria:

- The certificate must be X.509v3.
- Basic constraints must include `CA: false`.
- The key usage must include Digital Signature.
- The signing algorithm must be either RSA or ECDSA and must include SHA-256 or stronger message authentication. SHA-1 and MD5 cannot be used.

When a client presents an end-entity certificate, and the whole certificate chain is constructed, each certificate in the chain (from end-entity to the root) must have a unique Distinguished Name.

:::caution

Distinguished Names are _not_ case sensitive; that is, uppercase letters (such as ABC) and lowercase letters (such as abc) are equivalent.

:::

## How to issue root CA and end-entity certificates {#issue-certificates}

Temporal Cloud authenticates a client connection by validating the client certificate against one or more CA certificates that are configured for the specified Namespace. Choose one of the following options to generate and manage the certificates:

### Option 1: You already have certificate management infrastructure

If you have existing certificate management infrastructure that supports issuing CA and end-entity certificates, export the CA and generate end-entity certificates using your existing tools. Ensure that the CA certificate is long-lived and that the end-entity certificate expires before the CA certificate. Follow the instructions to [upload the CA certificate](/cloud/certificates#update-certificates-using-temporal-cloud-ui) and [configure your client](/cloud/certificates#configure-clients-to-use-client-certificates) with the end-entity certificate.

### Option 2: You don't have certificate management infrastructure

If you don't have existing certificate management infrastructure, we recommend using API keys for authentication instead. API keys are generally easier to manage than mTLS certificates if you're not already using certificate management infrastructure. If you still want to use mTLS, issue the CA and end-entity certificates using [tcld](#use-tcld-to-generate-certificates) or open-source tools like OpenSSL or [step CLI](#use-step-cli-to-generate-certificates).

#### Use tcld to generate certificates

You can generate CA and end-entity certificates by using [tcld](/cloud/tcld). Although Temporal Cloud supports long-lived CA certificates, a CA certificate generated by [tcld](/cloud/tcld) has a maximum duration of 1 year (`-d 1y`). You must set an end-entity certificate to expire before its root CA certificate, so specify its duration appropriately.

To create a new CA certificate, use `tcld gen ca`.

```sh
mkdir temporal-certs
cd temporal-certs
tcld gen ca --org temporal -d 1y --ca-cert ca.pem --ca-key ca.key
```

The contents of the generated `ca.pem` should be pasted into the "Authentication" section of your Namespace settings page.

To create a new end-entity certificate, use `tcld gen leaf`.

```sh
tcld gen leaf --org temporal -d 364d --ca-cert ca.pem --ca-key ca.key --cert client.pem --key client.key
```

You can now use the generated CA certificate (`ca.pem`) with Temporal Cloud and configure your client with these certs (`client.pem`, `client.key`).
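Before uploading, you can optionally sanity-check the generated files with OpenSSL. This is a minimal sketch assuming the file names from the tcld commands above:

```sh
# Confirm expiry dates: the leaf must expire before the CA
openssl x509 -in ca.pem -noout -subject -enddate
openssl x509 -in client.pem -noout -subject -enddate

# Confirm the client certificate chains to the CA you will upload
openssl verify -CAfile ca.pem client.pem
```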
Upload the contents of the `ca.pem` file to the **Authentication** section of your **Namespace** settings. Follow the instructions to [upload the CA certificate](/cloud/certificates#update-certificates-using-temporal-cloud-ui) and [configure your client](/cloud/certificates#configure-clients-to-use-client-certificates) with the end-entity certificate. #### Use step CLI to generate certificates Temporal Cloud requires client certificates for authentication and secure communication. [The step CLI](https://github.com/smallstep/cli) is a popular and easy-to-use tool for issuing certificates. Before you begin, ensure you have installed smallstep/cli by following the instructions in the [installation guide](https://github.com/smallstep/cli#installation). A Certificate Authority (CA) is a trusted entity that issues digital certificates. These certificates certify the ownership of a public key by the named subject of the certificate. End-entity certificates are issued and signed by a CA, and they are used by clients to authenticate themselves to Temporal Cloud. Create a self-signed CA certificate and use it to issue an end-entity certificate for your Temporal Cloud namespace. ##### 1. Create a Certificate Authority (CA) Create a new Certificate Authority (CA) using step CLI: ```command step certificate create "CertAuth" CertAuth.crt CertAuth.key --profile root-ca --no-password --insecure ``` This command creates a self-signed CA certificate named `CertAuth.crt` and private key `CertAuth.key`. This CA certificate will be used to sign and issue end-entity certificates. ##### 2. Set the Namespace Name Set the Namespace Name as the common name for the end-entity certificate: For Linux or macOS: ```command export NAMESPACE_NAME=your-namespace ``` For Windows: ```command set NAMESPACE_NAME=your-namespace ``` Replace `your-namespace` with the name of your Temporal Cloud namespace. ##### 3. Create and Sign an End-Entity Certificate Create and sign an end-entity certificate with a common name equal to the Namespace Name: ```command step certificate create ${NAMESPACE_NAME} ${NAMESPACE_NAME}.crt ${NAMESPACE_NAME}.key --ca CertAuth.crt --ca-key CertAuth.key --no-password --insecure --not-after 8760h ``` This command creates an end-entity certificate (`your-namespace.crt`) and private key (`your-namespace.key`) that is signed by your CA (`CertAuth`). ##### 4. (optional) Convert to PKCS8 Format for Java SDK If you are using the Temporal Java SDK, you will need to convert the PKCS1 file format to PKCS8 file format. Export the end-entity's private key to a PKCS8 file: ```command openssl pkcs8 -topk8 -inform PEM -outform PEM -in ${NAMESPACE_NAME}.key -out ${NAMESPACE_NAME}.pkcs8.key -nocrypt ``` ##### 5. Use the Certificates with Temporal Cloud You can now use the generated client certificate (`your-namespace.crt`) and the CA certificate (`CertAuth.crt`) with Temporal Cloud. Upload the contents of the `CertAuth.crt` file to the **CA Certificates** section of your **Namespace** settings. Follow the instructions to [upload the CA certificate](/cloud/certificates#update-certificates-using-temporal-cloud-ui) and [configure your client](/cloud/certificates#configure-clients-to-use-client-certificates) with the end-entity certificate. ## How to control authorization for Temporal Cloud Namespaces {#control-authorization} We recommend that an end-entity certificate be scoped to a specific Namespace to enforce the principle of least privilege. 
Because Temporal Cloud validates the full certificate chain, you can control authorization in either of two ways.

### Option 1: Issue a separate root certificate for each Namespace

Each certificate must belong to a chain up to the root CA certificate. Temporal uses the root CA certificate as the trusted authority for access to your Namespaces.

1. Ensure that your certificates meet the [certificate requirements](#certificate-requirements).
1. [Add client CA certificates to a Cloud Namespace](/cloud/tcld/namespace/#add).

### Option 2: Use the same root certificate for all Namespaces but create a separate certificate filter for each Namespace

[How to manage certificate filters in Temporal Cloud](#manage-certificate-filters)

## How to receive notifications about certificate expiration {#expiration-notifications}

To keep your Namespace secure and online, you must update the CA certificate for the Namespace _before_ the certificate expires. To help you remember to do so, Temporal Cloud sends [email notifications](/cloud/notifications#admin-notifications).

## How to handle a compromised end-entity certificate

:::warning
Temporal does not support or check certificate revocation lists (CRLs). Customers are expected to keep their certificates up to date.
:::

The recommended approach to avoiding compromised certificates is to use short-lived end-entity certificates. A short-lived compromised certificate can be left to expire on its own. Seek guidance from your infosec team to determine an appropriate value of "short-lived" for your business.

If you suspect or confirm that an end-entity certificate has been compromised, and leaving it to expire is not an option, take immediate action to secure your Temporal Cloud Namespace and prevent unauthorized access.

If you're using certificate filters, you can set the filters to block a compromised certificate. Follow the instructions in [How to manage certificate filters in Temporal Cloud](#manage-certificate-filters).

If you need to replace a compromised certificate manually, follow these steps:

### 1. Generate a new CA certificate

Follow the instructions in [How to issue CA and end-entity certificates](#issue-certificates). All end-entity certificates issued by the previous CA must be regenerated. Ensure the new CA certificate meets [the certificate requirements](#certificate-requirements).

### 2. Deploy the new CA certificate to the Namespace

Follow the instructions for issuing a [new CA certificate to a Namespace](#option-1-issue-a-separate-root-certificate-for-each-namespace). Deploy the new CA certificate alongside the existing one, so you don't lose connectivity with old end-entity certificates before the new ones are generated and deployed.

### 3. Regenerate end-entity certificates with the new CA certificate

Issue new end-entity certificates signed by the new CA certificate. Then [configure all clients](https://docs.temporal.io/cloud/certificates#configure-clients-to-use-client-certificates) (Temporal CLI, SDKs, or Workers) to use the new certificate and private key in place of the compromised ones, as described in [Configure clients to use client certificates](#configure-clients-to-use-client-certificates). Test the new certificate to confirm clients can connect to the Namespace without issues.

### 4. Remove the compromised CA certificate

Follow the instructions in [How to add, update, and remove certificates in a Temporal Cloud Namespace](#manage-certificates) to remove the compromised CA certificate from the Namespace.
### 5. Monitor and audit

After implementing the changes, monitor your Namespace for unauthorized access attempts or unusual activity in audit logs. Review your certificate management practices to identify how the compromise occurred. Consider implementing stricter controls, such as:

- Limiting end-entity certificates to specific Namespaces
- Rotating certificates regularly as a preventive measure
- Reviewing [audit logs](https://docs.temporal.io/cloud/audit-logs) on a regular schedule

## How to add, update, and remove certificates in a Temporal Cloud Namespace {#manage-certificates}

:::note
To manage certificates for a Namespace, a user must have [Namespace Admin](/cloud/users#namespace-level-permissions) permission for that Namespace.
:::

To manage certificates for Temporal Cloud Namespaces, use the **Namespaces** page in Temporal Cloud UI or the [tcld namespace accepted-client-ca](/cloud/tcld/namespace/#accepted-client-ca) commands.

Don't let your certificates expire! Add reminders to your calendar to issue new CA certificates well before the expiration dates of the existing ones. Temporal Cloud begins sending notifications 15 days before expiration. For details, see [How to receive notifications about certificate expiration](#expiration-notifications).

When updating CA certificates, it's important to follow a rollover process (sometimes referred to as "certificate rotation"). Doing so enables your Namespace to serve both CA certificates for a period of time until traffic to your old CA certificate ceases. This prevents any service disruption during the rollover process.

{/* How to update certificates in Temporal Cloud using Temporal Cloud UI */}

### Update certificates using Temporal Cloud UI

Updating certificates using the following strategy allows for a zero-downtime rotation of certificates.

1. On the left side of the window, select **Namespaces**.
2. Select the name of the Namespace to update.
3. In the top-right portion of the page for the Namespace, select **Edit**.
4. On the **Edit** page, select the **Authentication** card to expand it.
5. In the certificates box, scroll to the end of the existing certificate (that is, past `-----END CERTIFICATE-----`).
6. On the following new line, paste the entire PEM block of the new certificate.
7. Select **Save**.
8. Wait until all Workers are using the new certificate.
9. Return to the **Edit** page of the Namespace and select the **Authentication** card.
10. In the certificates box, delete the old certificate, leaving the new one in place.
11. Select **Save**.

{/* How to update certificates in Temporal Cloud using tcld */}

### Update certificates using tcld

Updating certificates using the following strategy allows for a zero-downtime rotation of certificates.

1. Create a single file that contains both your old and new CA certificate PEM blocks. Just concatenate the PEM blocks on adjacent lines.

   ```
   -----BEGIN CERTIFICATE-----
   ... old CA cert ...
   -----END CERTIFICATE-----
   -----BEGIN CERTIFICATE-----
   ... new CA cert ...
   -----END CERTIFICATE-----
   ```

1. Run the `tcld namespace accepted-client-ca set` command with the CA certificate bundle file.

   ```bash
   tcld namespace accepted-client-ca set --ca-certificate-file <path-to-bundle-file>
   ```

1. Monitor traffic to your old certificate until it ceases.
1. Create another file that contains only the new CA certificate.
1. Run the `tcld namespace accepted-client-ca set` command again with the updated CA certificate bundle file.
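Putting those tcld steps together, a minimal end-to-end sketch might look like the following. The file names and the `--namespace` value are illustrative; check the [tcld namespace accepted-client-ca](/cloud/tcld/namespace/#accepted-client-ca) reference for the full flag list:

```sh
# 1. Bundle the old and new CA certificates (illustrative file names)
cat ca-old.pem ca-new.pem > ca-bundle.pem

# 2. Serve both CAs during the rollover window
tcld namespace accepted-client-ca set --namespace <namespace-id> --ca-certificate-file ca-bundle.pem

# 3. After traffic to the old certificate ceases, keep only the new CA
tcld namespace accepted-client-ca set --namespace <namespace-id> --ca-certificate-file ca-new.pem
```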
## How to manage certificate filters in Temporal Cloud {#manage-certificate-filters}

To limit access to specific [end-entity certificates](#end-entity-certificates), create certificate filters. Each filter contains values for one or more of the following fields:

- commonName (CN)
- organization (O)
- organizationalUnit (OU)
- subjectAlternativeName (SAN)

Corresponding fields in the client certificate must match every specified value in the filter. The values for the fields are case-insensitive. If no wildcard is used, each specified value must match its field exactly. To match a substring, place a single `*` wildcard at the beginning or end (but not both) of a value. You cannot use a `*` wildcard by itself.

You can create a maximum of 25 certificate filters in a Namespace. If you provide a well-known CA certificate, you cannot clear a certificate filter. A well-known CA certificate is one that is typically included in the certificate store of an operating system.

**Examples**

In the following example, only the CN field of the certificate's subject is checked, and it must be exactly `code.example.com`. The other fields are not checked.

```json
AuthorizedClientCertificate {
  CN : "code.example.com"
}
```

In the following example, the CN field must be `stage.example.com` and the O field must be `Example Code Inc.`

```json
AuthorizedClientCertificate {
  CN : "stage.example.com"
  O : "Example Code Inc."
}
```

When using a `*` wildcard, the following values are valid:

- `*.example.com` matches `code.example.com` and `text.example.com`.
- `Example Code*` matches `Example Code` and `Example Code Inc`.

The following values are not valid:

- `.example.*`
- `code.*.com`
- `*`

{/* How to manage certificate filters in Temporal Cloud using Temporal Cloud UI */}

### Manage certificate filters using Temporal Cloud UI

To add or remove a certificate filter, follow these steps:

1. On the left side of the window, click **Namespaces**.
1. On the **Namespaces** page, click the name of the Namespace to manage.
1. On the right side of the page for the selected Namespace, click **Edit**.
1. On the **Edit** page, click the **Authentication** card.
   - To add a certificate filter, click **Add a Certificate Filter** and enter values in one or more fields.
   - To remove a certificate filter, click the **×** in the upper-right corner of the filter details.
1. To cancel your changes, click **Back to Namespace**. To save your changes, click **Save**.

{/* How to manage certificate filters in Temporal Cloud using tcld */}

### Manage certificate filters using tcld

To set or clear certificate filters, use the following [tcld](/cloud/tcld) commands:

- [tcld namespace certificate-filters import](/cloud/tcld/namespace/#import)
- [tcld namespace certificate-filters clear](/cloud/tcld/namespace/#clear)

To view the current certificate filters, use the [tcld namespace certificate-filters export](/cloud/tcld/namespace/#export) command.
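As an illustration of the tcld flow, the import command takes a JSON file describing the filters. The schema in this sketch (a `filters` array with camelCase field names) is an assumption based on the fields listed above; confirm it against the [tcld namespace certificate-filters import](/cloud/tcld/namespace/#import) reference before relying on it:

```sh
# Write an illustrative filter file (schema assumed; verify against the tcld
# reference). This filter admits leaf certificates whose CN ends in ".example.com".
cat > filters.json <<'EOF'
{
  "filters": [
    { "commonName": "*.example.com" }
  ]
}
EOF
tcld namespace certificate-filters import --namespace <namespace-id> --certificate-filter-file filters.json
```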
## Configure clients to use client certificates {#configure-clients-to-use-client-certificates}

- [Go SDK](/develop/go/temporal-client#connect-to-temporal-cloud)
- [Java SDK](/develop/java/temporal-client#connect-to-temporal-cloud)
- [PHP SDK](/develop/php/temporal-client#connect-to-a-dev-cluster)
- [Python SDK](/develop/python/temporal-client#connect-to-temporal-cloud)
- [TypeScript SDK](/develop/typescript/temporal-client#connect-to-temporal-cloud)
- [.NET SDK](/develop/dotnet/temporal-client#connect-to-temporal-cloud)

### Configure Temporal CLI {#configure-temporal-cli}

To connect to a Temporal Namespace using the Temporal CLI and certificate authentication, specify your Namespace, its gRPC endpoint, your credentials, and the TLS server name along with any command (`temporal workflow list` is used here as an example):

```sh
temporal workflow list \
    --address <namespace>.<account-id>.tmprl.cloud:7233 \
    --namespace <namespace>.<account-id> \
    --tls-ca-path <path-to-ca.pem> \
    --tls-cert-path <path-to-client.pem> \
    --tls-key-path <path-to-client.key> \
    --tls-server-name <namespace>.<account-id>.tmprl.cloud
```

For more information on Temporal CLI environment variables, see [Environment variables](/cli#environment-variables).

---

## Get started with Temporal Cloud

Getting started with Temporal Cloud involves a few key steps:

1. [Sign up for Temporal Cloud](#sign-up-for-temporal-cloud)
1. [Create a Namespace](#create-a-namespace)
1. [Set up your Clients and Workers](#set-up-your-clients-and-workers)
1. [Run your first Workflow](#run-your-first-workflow)
1. [Invite your team](#invite-your-team)

## Sign up for Temporal Cloud

To create a Temporal Cloud account, you can:

- Sign up [directly](https://temporal.io/get-cloud); or
- Subscribe at the AWS Marketplace for [Temporal Cloud Pay-As-You-Go](https://aws.amazon.com/marketplace/pp/prodview-xx2x66m6fp2lo). Signing up through the AWS Marketplace is similar to signing up directly on the Temporal Cloud site, but billing goes through your AWS account.
- To purchase Temporal Cloud on the Google Cloud Marketplace, please contact our team at sales@temporal.io.

For information about Temporal Cloud Pricing, see our [Pricing Page](/cloud/pricing).

## Create a Namespace

See [Managing Namespaces](/cloud/namespaces#create-a-namespace) to create your first namespace. Temporal Cloud supports either [API key](/cloud/api-keys) or [mTLS](/cloud/certificates) authentication for each namespace. If you're not sure which to use, we recommend [API keys](/cloud/api-keys) because they're easier to manage and rotate for most teams. If your organization already has public key infrastructure (PKI) and is familiar with certificate management, then [mTLS](/cloud/certificates) is an excellent choice.
## Set up your Clients and Workers

See our guides for connecting each SDK to your Temporal Cloud Namespace:

- [Connect to Temporal Cloud in Go](/develop/go/temporal-client#connect-to-temporal-cloud)
- [Connect to Temporal Cloud in Java](/develop/java/temporal-client#connect-to-temporal-cloud)
- [Connect to Temporal Cloud in Python](/develop/python/temporal-client#connect-to-temporal-cloud)
- [Connect to Temporal Cloud in TypeScript](/develop/typescript/core-application#connect-to-temporal-cloud)
- [Connect to Temporal Cloud in .NET](/develop/dotnet/temporal-client#connect-to-temporal-cloud)
- [Connect to Temporal Cloud in PHP](/develop/php/temporal-client#connect-to-temporal-cloud)
- [Connect to Temporal Cloud in Ruby](/develop/ruby/temporal-client#connect-to-temporal-cloud)

## Run your first Workflow

See our guides for starting a workflow using each SDK:

- [Start a workflow in Go](/develop/go/temporal-client#start-workflow-execution)
- [Start a workflow in Java](/develop/java/temporal-client#start-workflow-execution)
- [Start a workflow in Python](/develop/python/temporal-client#start-workflow-execution)
- [Start a workflow in TypeScript](/develop/typescript/core-application#start-workflow-execution)
- [Start a workflow in .NET](/develop/dotnet/temporal-client#start-workflow)
- [Start a workflow in PHP](/develop/php/temporal-client#start-workflow-execution)
- [Start a workflow in Ruby](/develop/ruby/temporal-client#start-workflow)

## Invite your team

See [Managing users](/cloud/users) to add other users and assign them roles. You can also use [Service Accounts](/cloud/service-accounts) to represent machine identities. Because you created the account when you signed up, you are the first [Account Owner](/cloud/users#account-level-roles) for your account.

---

## Namespaces

:::info Temporal Cloud
This page covers namespace operations in **Temporal Cloud**. For core namespace concepts, see [Temporal Namespace](/namespaces). For open source Temporal, see [Managing Namespaces](/self-hosted-guide/namespaces).
:::

A Namespace is a unit of isolation within Temporal Cloud, providing security boundaries, Workflow management, unique identifiers, and gRPC endpoints.

- [Create a Namespace](#create-a-namespace)
- [Access a Namespace](#access-namespaces)
- [Manage Namespaces](#manage-namespaces)
- [Delete a Namespace](#delete-a-namespace)
- [Tag a Namespace](#tag-a-namespace)

## What is a Cloud Namespace Name? {#temporal-cloud-namespace-name}

A Cloud Namespace Name is a customer-supplied name for a [Namespace](/namespaces) in Temporal Cloud. Each Namespace Name, such as `accounting-production`, is unique within the scope of a customer's account. It cannot be changed after the Namespace is provisioned.

Each Namespace Name must conform to the following rules:

- A Namespace Name must contain at least 2 characters and no more than 39 characters.
- A Namespace Name must begin with a letter, end with a letter or number, and contain only letters, numbers, and the hyphen (-) character.
- Each hyphen (-) character must be immediately preceded _and_ followed by a letter or number; consecutive hyphens are not permitted.
- All letters in a Namespace Name must be lowercase.

## What is a Temporal Cloud Account ID? {#temporal-cloud-account-id}

A Temporal Cloud Account ID is a unique customer identifier assigned by Temporal Technologies. Each ID is a short string of numbers and letters, such as `f45a2`, at least five characters long.
This account identifier is retained throughout the time each customer uses Temporal Cloud. At times you may need to know your Account ID. Accessing the account's Namespaces provides an easy way to capture this information: each Temporal Cloud Namespace uses an Account ID suffix, the alphanumeric string found after the period in any Temporal Cloud Namespace name.

You can retrieve an Account ID from the [Temporal Cloud](https://cloud.temporal.io) Web UI or by using the `tcld` utility at a command-line interface (CLI).

Follow these steps to retrieve your Account ID in the Web UI:

1. Log into Temporal Cloud.
1. Select your account avatar at the top right of the page. A profile dropdown menu appears.
1. Copy the Cloud Account ID from the menu. In this example, the Account ID is `123de`.

Alternatively, use `tcld`:

1. Use the `tcld` utility to log into an account.

   ```
   tcld login
   ```

   The `tcld` output presents a URL with an activation code at the end. Take note of this code. The utility blocks until the login/activation process completes.

   ```
   Login via this url: https://login.tmprl.cloud/activate?user_code=KTGC-ZPWQ
   ```

   A Web page automatically opens for authentication in your default browser.

1. Visit the browser. Ensure the user code shown by the CLI utility matches the code shown in the Web browser. Then, click Confirm in the browser to continue. After confirmation, Web feedback lets you know that the CLI "device" is now connected.

1. Return to the command line and issue the following command.

   ```
   tcld namespace list
   ```

   The CLI tool returns a short JSON packet with your namespace information. This is the same list found in the Temporal Cloud Web UI Namespaces list. Like the browser version, each Namespace uses an Account ID suffix.

   ```
   {
       "namespaces": [
           "your-namespace.123de",
           "another-namespace.123de"
       ],
       "nextPageToken": ""
   }
   ```

Each Namespace automatically appends an Account ID suffix to its customer-supplied identifier. This five-character-or-longer string appears after the name, separated by a period. In this Namespace listing sample, the Account ID is `123de`.

## What is a Cloud Namespace Id? {#temporal-cloud-namespace-id}

A Cloud Namespace Id is a globally unique identifier for a [Namespace](/namespaces) in Temporal Cloud. A Namespace Id is formed by concatenating the following:

1. A [Namespace Name](#temporal-cloud-namespace-name)
1. A period (.)
1. The [Account ID](#temporal-cloud-account-id) to which the Namespace belongs

For example, for the Account ID `123de` and Namespace Name `accounting-production`, the Namespace Id is `accounting-production.123de`.

## What is a Cloud gRPC Endpoint? {#temporal-cloud-grpc-endpoint}

Temporal Clients communicate between application code and a Temporal Server by sending and receiving messages via the gRPC protocol. gRPC is a Remote Procedure Call framework featuring low latency and high performance, giving Temporal an efficient, language-agnostic communication framework.

Every Temporal Namespace uses a gRPC endpoint for communication. When migrating to Temporal Cloud, you'll need to switch the gRPC endpoint in your code from your current hosting, whether self-hosted or locally hosted, to Temporal Cloud.

A gRPC endpoint appears on the detail page for each Cloud Namespace. Follow these steps to find it:

1. Log into your account on [cloud.temporal.io](https://cloud.temporal.io/namespaces).
2. Navigate to the Namespace list page from the left-side vertical navigation.
3. Tap or click the Namespace Name to open the page for the Namespace whose endpoint you want to retrieve.
4. On the Namespace detail page, click the **Connect** button in the top-right corner of the page.
5. Click the copy icon next to the gRPC address to copy it to your clipboard.

See [How to access a Namespace in Temporal Cloud](/cloud/namespaces/#access-namespaces) for more information on different gRPC endpoint types and how to access them.

## How to create a Namespace in Temporal Cloud {#create-a-namespace}

:::info
The user who creates a [Namespace](/namespaces) is automatically granted [Namespace Admin](/cloud/users#namespace-level-permissions) permission for that Namespace. To create a Namespace, a user must have the Developer, Account Owner, or Global Admin account-level [Role](/cloud/users#account-level-roles).
:::

:::tip
By default, each account is allocated a limit of ten Namespaces. As you start using Namespaces by scheduling Workflows, Temporal Cloud automatically raises your allowance. This automatic adjustment happens whenever all your Namespaces are in use, up to a maximum of 100 Namespaces. You can request further increases beyond the 100 Namespace limit by opening a [support ticket](/cloud/support#support-ticket).
:::

### Information needed to create a Namespace

To create a Namespace in Temporal Cloud, gather the following information:

- [Namespace Name](/cloud/namespaces#temporal-cloud-namespace-name), region, and Cloud Provider.
- [Retention Period](/temporal-service/temporal-server#retention-period) for the [Event History](/workflow-execution/event#event-history) of closed [Workflow Executions](/workflow-execution).
- [CA certificate](/cloud/certificates#certificate-requirements) for the Namespace, if you are using mTLS authentication.
- [Codec Server endpoint](/production-deployment/data-encryption#set-your-codec-server-endpoints-with-web-ui-and-cli) to show decoded payloads to users in the Event History for Workflow Executions in the Namespace. For details, see [Securing your data](/production-deployment/data-encryption).
- [Permissions](/cloud/users#namespace-level-permissions) for each user.

### Create a Namespace using Temporal Cloud UI

1. Gather the information listed earlier in [Information needed to create a Namespace](#information-needed-to-create-a-namespace).
1. Go to the Temporal Cloud UI and log in.
1. On the left side of the window, click **Namespaces**.
1. On the **Namespaces** page, click **Create Namespace** in the upper-right portion of the window.
1. On the **Create Namespace** page in **Name**, enter the Namespace Name.
1. In **Cloud Provider**, select the cloud provider in which to host this Namespace.
1. In **Region**, select the region in which to host this Namespace.
1. In **Retention Period**, specify a value from 1 to 90 days. When choosing this value, consider your needs for Event History versus the cost of maintaining that Event History. Typically, a development Namespace has a short retention period and a production Namespace has a longer retention period. (If you need to change this value later, contact [Temporal Support](/cloud/support#support-ticket).)
1. Select your authentication method: [API keys](/cloud/api-keys) or [mTLS](/cloud/certificates).
1. If using mTLS authentication, paste the CA certificate for this Namespace.
1. Optional: In **Codec Server**, enter the HTTPS URL (including the port number) of your Codec Server endpoint.
   You may also enable "Pass the user access token with your endpoint" and "Include cross-origin credentials." For details, see [Hosting your Codec Server](/production-deployment/data-encryption#set-your-codec-server-endpoints-with-web-ui-and-cli).
1. Click **Create Namespace**.

See the [`tcld` namespace create](/cloud/tcld/namespace/#create) command reference for details.

## What are some Namespace best practices? {#best-practices}

This section provides general guidance for organizing [Namespaces](/namespaces) across use cases, services, applications, or domains. Temporal Cloud provides Namespace-as-a-service, so the Namespace is the endpoint. Customers should consider not only a Namespace naming convention but also how to group or isolate workloads using the Namespace as a boundary.

Each team can have its own Namespace for improved modularity, security, debugging, and fault isolation. Namespaces contain the blast radius of misbehaving Workers that may exhaust rate limits. Sensitive Workflow state (such as PCI data) can be secured with per-Namespace permissions and encrypted with a separate encryption key. Temporal Applications in different Namespaces may be connected with [Nexus](/cloud/nexus) by exposing a clean service contract for others to use with built-in [Nexus access controls](/cloud/nexus/security). Nexus supports cross-team, cross-domain, multi-region, and multi-cloud use cases.

### Constraints and limitations

Before considering an appropriate Namespace configuration, you should be aware of the following constraints:

- By default, each account is allocated a limit of ten Namespaces. As you create and use your Namespaces, for example by scheduling Workflows, Temporal Cloud identifies your usage patterns and gradually raises your limit, up to 100 Namespaces. You can request further increases beyond the 100 Namespace limit by opening a [support ticket](/cloud/support#support-ticket).
- Each Namespace has a rate limit, which is measured in Actions per second (APS). A Namespace may be throttled when its throughput becomes too high; throttling means limiting the rate at which actions are performed to prevent overloading the system. A Namespace's default limit is set at 400 APS and automatically adjusts based on recent usage (over the prior 7 days). Your APS limit will never fall below this default value.
- Each Namespace has a default service-level agreement (SLA) of 99.9% uptime. You can opt in to [High Availability features](https://docs.temporal.io/cloud/high-availability) with a 99.99% contractual SLA.
- A Namespace is a security isolation boundary. Access to Temporal by [Worker Processes](/workers#worker-process) is permitted at the Namespace level. Isolating applications or environments (development, test, staging, production) should take this into consideration.
- A Namespace is provisioned with an endpoint for executing your Workflows. Accessing a Namespace from a Temporal Client requires [API key](/cloud/api-keys) or [mTLS](/cloud/certificates) authentication.
- [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) uniqueness is per Namespace.
- [Task Queue](/task-queue) names are unique per Namespace.
- Closed Workflow retention is per Namespace.
- RBAC [permissions](/cloud/users#namespace-level-permissions) are implemented at the Namespace level.

### General guidance

Namespace configuration requires some consideration. Following are some general guidelines to consider.
- Namespaces are usually defined per use case. A use case can encompass a broad range of Workflow types and a nearly unlimited scale of concurrent [Workflow Executions](/workflow-execution).
- Namespaces can be split along additional boundaries such as service, application, domain, or even sub-domain.
- Environments such as production and development usually have requirements for isolation. We recommend that each environment has its own Namespace.
- Namespaces should be used to reduce the "blast radius" for mission-critical applications.
- Workflows that need to communicate with each other should (for now) be in the same Namespace.
- If you need to share Namespaces across team or domain boundaries, be sure to ensure the uniqueness of Workflow Ids.

### Examples

Following are some ideas about how to organize Namespaces.

#### Example 1: Namespace per use case and environment

We recommend using one Namespace for each use case and environment combination for simple configurations in which multiple services and team or domain boundaries don't exist. Sample naming convention:
`<use-case>_<environment>`
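For example, a hypothetical `order-processing` use case would yield Namespaces such as `order-processing_production` and `order-processing_development`.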
#### Example 2: Namespace per use case, service, and environment

We recommend using one Namespace for each use case, service, and environment combination when multiple services that are part of the same use case communicate externally to Temporal via API (HTTP/gRPC). Sample naming convention:
`<use-case>_<service>_<environment>`
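For example, the same hypothetical use case with a dedicated `billing` service would yield `order-processing_billing_production`.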
#### Example 3: Namespace per use case, domain, and environment

We recommend using one Namespace per use case, domain, and environment combination when multiple services that are part of the same use case need to communicate with one another via [Signals](/sending-messages#sending-signals) or by starting [Child Workflows](/child-workflows). In this case, though, you must be mindful about Workflow Id uniqueness by prefixing each Workflow Id with a service-specific string. The name of each Task Queue must also be unique. If multiple teams are involved, the domain could also represent a team boundary. Sample naming convention:
`<use-case>_<domain>_<environment>`
Sample Workflow Id convention:
`<service-string>_<workflow-id>`
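As a hypothetical instance of this convention, a `payments` domain in the `order-processing` use case would use the Namespace `order-processing_payments_production`, and its `billing-svc` service would start Workflows with Ids such as `billing-svc_invoice-2024-001` so they cannot collide with another service's Workflow Ids in the same Namespace.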
## How to access a Namespace in Temporal Cloud {#access-namespaces}

{/* How to access a Namespace in Temporal Cloud */}

Temporal Cloud normally supports authentication to Namespaces using [API keys](/cloud/api-keys) _or_ [mTLS](/cloud/certificates). If you need to migrate from one authentication method to another, or you require both API key and mTLS authentication to be enabled on your Namespace, please contact [Support](https://docs.temporal.io/cloud/support#support-ticket).

:::info
Namespace authentication requiring both API key and mTLS is in [pre-release](/evaluate/development-production-features/release-stages), and doesn't support [High Availability features](/cloud/high-availability).
:::

See the documentation for [API keys](/cloud/api-keys) and [mTLS certificates](/cloud/certificates) for more information on how to create and manage your credentials.

Programmatically accessing your Namespace requires specific endpoints based on your authentication method. There are two types of gRPC endpoints for accessing a Namespace in Temporal Cloud:

- A Namespace endpoint (`<namespace>.<account-id>.tmprl.cloud:7233`)
- A regional endpoint (`<region>.<cloud-provider>.api.temporal.io:7233`)

Which one to use depends on your authentication method and whether your Namespace has [High Availability features](/cloud/high-availability) enabled, as shown in the table below.

|                        | Not High Availability | High Availability                                                                                                              |
| ---------------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| mTLS Authentication    | Namespace endpoint    | Namespace endpoint                                                                                                               |
| API Key Authentication | Regional endpoint     | Both work, but we recommend using the Namespace endpoint because it reduces the unavailability window during a failover event  |

:::info
When switching High Availability features on or off for a Namespace, you may need to update the gRPC endpoint used by your clients and Workers, because the Namespace endpoint changes based on whether High Availability features are enabled. See [Disable High Availability](/cloud/high-availability/enable#disable) for more information.
:::

For information on how to connect Clients using a specific authentication method, see the following documentation:

- To use API keys to connect with the [Temporal CLI](/cli), [Client SDK](/develop), [tcld](/cloud/tcld), [Cloud Ops API](/ops), and [Terraform](/cloud/terraform-provider), see [Use API keys to authenticate](/cloud/api-keys#using-apikeys).
- To use mTLS to connect with the [Temporal CLI](/cli) and [Client SDK](/develop), see [Configure Clients to use Client certificates](/cloud/certificates#configure-clients-to-use-client-certificates).

For accessing the Temporal Web UI, use the HTTPS endpoint in the form `https://cloud.temporal.io/namespaces/<namespace>.<account-id>`. For example: `https://cloud.temporal.io/namespaces/accounting-production.f45a2`.

To ensure the security of your data, all traffic to and from your Namespace is encrypted. However, for enhanced protection, you have additional options:

- (Recommended) Set up private connectivity by [creating a ticket for Temporal Support](/cloud/support#support-ticket).
- Set up your allow list for outgoing network requests from your Clients and Workers with the IP address ranges of the Cloud Provider region in which your Namespace is located:
  - [AWS IP address ranges](https://docs.aws.amazon.com/vpc/latest/userguide/aws-ip-ranges.html)
  - [GCP IP address ranges](https://cloud.google.com/compute/docs/faq#find_ip_range)

## How to manage Namespaces in Temporal Cloud {#manage-namespaces}

{/* How to manage Namespaces in Temporal Cloud using Temporal Cloud UI */}

### Manage Namespaces in Temporal Cloud using Temporal Cloud UI

To list Namespaces:

- On the left side of the window, select **Namespaces**.

To edit a Namespace (including custom Search Attributes, certificates, certificate filters, Codec Server endpoint, permissions, and users), find the Namespace and do either of the following:

- On the right end of the Namespace row, select the three vertical dots (⋮). Click **Edit**.
- Select the Namespace name. In the top-right portion of the page, select **Edit**.

On the **Edit** page, you can do the following:

- Add a [custom Search Attribute](/search-attribute#custom-search-attribute).
- [Manage CA certificates](/cloud/certificates).
- [Manage certificate filters](/cloud/certificates#manage-certificate-filters-using-temporal-cloud-ui).
- Set the [Codec Server endpoint](/production-deployment/data-encryption#set-your-codec-server-endpoints-with-web-ui-and-cli) for all users on the Namespace. Each user on the Namespace has the option to [override this setting](/production-deployment/data-encryption#web-ui) in their browser.
- Manage [Namespace-level permissions](/cloud/users#namespace-level-permissions).
- Add users. To add a user to a Namespace, scroll to the bottom of the page and select **Add User**.

After you make changes, select **Save** in the top-right or bottom-left portion of the page.

{/* How to manage Namespaces in Temporal Cloud using tcld */}

### Manage Namespaces in Temporal Cloud using tcld

To list Namespaces and get information about them, use the following [tcld](/cloud/tcld/) commands:

- [tcld namespace list](/cloud/tcld/namespace/#list)
- [tcld namespace get](/cloud/tcld/namespace/#get)

To manage certificates, use the [tcld namespace accepted-client-ca](/cloud/tcld/namespace/#accepted-client-ca) commands. For more information, see [How to manage certificates in Temporal Cloud](/cloud/certificates).

To manage certificate filters, use the [tcld namespace certificate-filters](/cloud/tcld/namespace/#certificate-filters) commands. For more information, see [How to manage certificate filters in Temporal Cloud](/cloud/certificates#manage-certificate-filters).

## How to delete a Namespace in Temporal Cloud {#delete-a-namespace}

:::info
To delete a Namespace, a user must have Namespace Admin [permission](/cloud/users#namespace-level-permissions) for that Namespace.
:::

### Delete a Namespace using Temporal Cloud UI

1. Go to the Temporal Cloud UI and log in.
1. On the left side of the window, select **Namespaces**.
1. On the **Namespaces** page, select a Namespace Name.
1. On the Namespace page, select **Edit** in the upper-right portion of the window.
1. On the **Edit** Namespace page, select **Delete Namespace** in the upper-right portion of the window.
1. In the **Delete Namespace** dialog, type `DELETE` to confirm the deletion of that Namespace.
1. Select **Delete**.

After deleting a Temporal Cloud Namespace, the Temporal Service immediately removes the Namespace's Workflow Executions and Task Queues.
Make sure all Workflows have been completed, canceled, or terminated before removing a Namespace. The Namespace removal is permanent. Closed Workflow Histories remain in Temporal storage until the user-defined retention period expires. This period reflects the policy in effect when the Workflow Execution was closed. For further questions or concerns, contact [Support](https://docs.temporal.io/cloud/support#support-ticket).

### Delete a Namespace using tcld

See the [tcld namespace delete](/cloud/tcld/namespace/#delete) command reference for details.

### Namespace deletion protection {#delete-protection}

To prevent accidental Namespace deletion, Temporal Cloud provides a protection feature. When you enable Deletion Protection for your production environment Namespace, you ensure that critical data won't be deleted unintentionally. Follow these steps:

- Visit the [Namespaces page](https://cloud.temporal.io/namespaces) on Temporal Cloud.
- Open your Namespace details page.
- Select the Edit button.
- Scroll down to Security and click the disclosure button (downward-facing caret).
- Enable **Deletion Protection**.

To enable or disable this feature using [`tcld`](/cloud/tcld), use the following command. Set the value to `true` to enable or `false` to disable:

```
tcld namespace lifecycle set \
    --namespace <namespace-id> \
    --enable-delete-protection true
```

## How to tag a Namespace in Temporal Cloud {#tag-a-namespace}

Tags are key-value metadata pairs that can be attached to namespaces in Temporal Cloud to help operators organize, track, and manage namespaces more easily.

### Tag Structure and Limits

- Each namespace can have a maximum of 10 tags
- Each key must be unique for a given namespace (e.g., a namespace cannot have both `team:foo` and `team:bar` tags)
- Keys and values must be 1-63 characters in length
- Allowed characters: lowercase letters (`a-z`), numbers (`0-9`), periods (`.`), underscores (`_`), hyphens (`-`), and at signs (`@`)
- Tags are not a secure storage mechanism and should not store PII or PHI
- Tags will not change the behavior of the tagged resource
- There is a soft limit of 1000 unique tag keys per account

### Permissions

- Only [**Account Admins** and **Account Owners**](/cloud/users#account-level-roles) can create and edit tags
- All users with access to a namespace can view its tags

### tcld

See the [tcld namespace tags](/cloud/tcld/namespace/#tags) command reference for details.

### Terraform

See the [Terraform provider](https://github.com/temporalio/terraform-provider-temporalcloud/blob/main/docs/resources/namespace_tags.md) for details.

### Web UI

Tags can be viewed and managed through the Temporal Cloud web interface. When viewing a namespace, you'll see tags displayed and can add, edit, or remove them if you have the appropriate permissions.

---

## Manage service accounts

Service Accounts are a type of identity in Temporal Cloud. While a User identity represents a human who uses Temporal Cloud, a Service Account is a machine identity that is not associated with a human user. Temporal Cloud gives the Account Owner and Global Admin [roles](/cloud/users#account-level-roles) the ability to create and manage these machine identities.
With the addition of Service Accounts, Temporal Cloud now supports two identity types:

- Users (tied to a human, identified by email address or ID)
- Service Accounts (not tied to a human, email address optional, identified by name or ID)

Service Accounts use API Keys as the authentication mechanism to connect to Temporal Cloud. Use a Service Account to represent a non-human identity whenever it authenticates to Temporal Cloud, whether for operations automation or for Workflow Execution and management through the Temporal SDKs and the Temporal CLI.

:::tip
Namespace Admins can now manage and create [Namespace-scoped Service Accounts](/cloud/service-accounts#scoped), regardless of their Account Role.
:::

## Manage Service Accounts

Account Owner and Global Admin [roles](/cloud/users#account-level-roles) can manage Service Accounts by creating, viewing, updating, and deleting Service Accounts using the following tools:

- Temporal Cloud UI
- Temporal Cloud CLI (tcld)
  - Use `tcld service-account --help` for a list of all service-account commands

Account Owner and Global Admin [roles](/cloud/users#account-level-roles) also have the ability to manage API Keys for Service Accounts.

### Prerequisites

- A Cloud user account with Account Owner or Global Admin [role](/cloud/users#account-level-roles) permissions
- Access to the Temporal Cloud UI or Temporal Cloud CLI (tcld)
- Enable access to API Keys for your Account
- To manage Service Accounts using the Temporal Cloud CLI (tcld), upgrade to the latest version of tcld (v0.18.0 or higher) using `brew upgrade tcld`.
- If using a version of tcld less than v0.31.0, enable Service Account commands with `tcld feature toggle-service-account`.

### Create a Service Account

Create a Service Account using the Temporal Cloud UI or tcld. While User identities are invited to Temporal Cloud, Service Accounts are created in Temporal Cloud.

1. Go to [Settings → Identities](https://cloud.temporal.io/settings/identities)
2. Click the `Create Service Account` button located near the top of the `Identities` page
3. Provide the following information:
   - **Name** (required)
   - **Description** (optional)
   - **Account Level Role** (required)
   - **Namespace Permissions** (optional)
     - Use this section of the Create Service Account page to grant the Service Account access to individual Namespaces
4. Click `Create Service Account` at the bottom of the page
   - A status message is displayed at the bottom right corner of the screen and on the next screen
   - You will be prompted to create an API Key for the Service Account (optional)
5. (Optional) Create an API Key
   - It is recommended to create an API Key for the Service Account right after you create the Service Account, though you can create and manage API Keys for Service Accounts at any time
   - See the API Key [documentation](/cloud/api-keys) for more information on creating and managing API Keys

To create a Service Account using tcld, use the `tcld service-account create` command:

```
tcld service-account create -n "sa_test" -d "this is a test SA" --ar "Read"
```

This example creates a Service Account with the name `"sa_test"`, description `"this is a test SA"`, and a `Read` Account Role. Creating a Service Account requires the `name` and `account-role` attributes (as above). You can also provide Namespace Permissions for the Service Account using the `--np` flag.

Creating a Service Account returns the `ServiceAccountId`, which is used to retrieve, update, or delete a Service Account.
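For example, to also grant Namespace Permissions at creation time, pass the `--np` flag described above. The Service Account name and Namespace in this sketch are illustrative:

```
tcld service-account create -n "billing-worker" -d "worker identity for the billing service" --ar "Read" --np "billing-production=Write"
```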
### View Service Accounts

View a single Service Account or all Service Accounts using the Temporal Cloud UI or tcld. Service Accounts are listed in the `Identities` section of the `Settings` page, along with Users. To locate a Service Account:

1. Go to [Settings → Identities](https://cloud.temporal.io/settings/identities)
2. Select the `Service Accounts` filter

To view all Service Accounts in your account using tcld, use the `tcld service-account list` command:

```
tcld service-account list
```

### Delete a Service Account

Delete a Service Account using the Temporal Cloud UI or tcld. When you delete a Service Account, all associated API keys are automatically deleted as well, so you don't need to remove API keys manually after deleting a Service Account.

1. Go to [Settings → Identities](https://cloud.temporal.io/settings/identities)
2. Find the relevant Service Account
3. Select the vertical ellipsis menu in the Service Account row
4. Select `Delete`
5. Confirm the delete action when prompted

To delete a Service Account using tcld, use the `tcld service-account delete` command:

```
tcld service-account delete --service-account-id "e9d87418221548"
```

Use the `tcld service-account list` command to validate that the Service Account has been removed from the account. The Service Account is deleted when it no longer appears in the output of `tcld service-account list`.

### Update a Service Account {#update}

Update a Service Account using the Temporal Cloud UI or tcld.

1. Go to [Settings → Identities](https://cloud.temporal.io/settings/identities)
2. Find the relevant Service Account
3. Select the vertical ellipsis menu in the Service Account row
4. Select `Edit`
5. Make changes to the Service Account
   - You can change the Service Account's name, description, Account Level Role, and Namespace Permissions
6. Click the `Save` button located in the bottom left of the screen
   - A status message is displayed at the bottom right corner of the screen

Three different commands exist to help users update a Service Account using tcld:

- `tcld service-account update`: update a Service Account's name or description field
- `tcld service-account set-account-role`: update a Service Account's Account Role
- `tcld service-account set-namespace-permissions`: update a Service Account's Namespace Permissions

Example:

```
tcld service-account update --id "2f68507677904e09b9bcdbf93380bb95" -d "new description"
```

## Namespace-scoped Service Accounts {#scoped}

There is a special type of Service Account, called a Namespace-scoped Service Account, which shares the same functionality as the Service Accounts above but is limited (or scoped) to a single namespace. In particular, a Namespace-scoped Service Account must _always_ have:

- A `Read` Account Role
- A single Namespace Permission

Note that a Namespace-scoped Service Account cannot be reassigned to a different Namespace after creation, but its Namespace permission can be modified (e.g. from `Read` to `Write`). Namespace-scoped Service Accounts are useful in situations when you need to restrict a client's access to a single Namespace.

You can retrieve, update, and delete a Namespace-scoped Service Account using the same process and commands as above, but creation is slightly different.

### Permissions

Unlike regular Service Accounts, which require a Global Admin or Account Owner role, Namespace-scoped Service Accounts can be created and managed by Namespace Admins.
For example, an Account Developer with Namespace Admin permission for `test_ns` can create a Service Account scoped to `test_ns`. Global Admins and Account Owners can also create Namespace-scoped Service Accounts, as they implicitly have Namespace Admin rights for all Namespaces.

### Create a Namespace-scoped Service Account

As with regular Service Accounts, Namespace-scoped Service Accounts can be created using the Temporal Cloud UI or tcld.

#### Using the Cloud UI {#scoped-ui}

Currently, creating a Namespace-scoped Service Account from the Temporal Cloud UI happens on an individual [Namespace](/cloud/namespaces#manage-namespaces) page. If the current Namespace has API key authentication enabled, a `Generate API Key` button appears as a banner at the top of the Namespace page or in the `Authentication` section. Clicking `Generate API Key` automatically creates a Namespace-scoped Service Account for the given Namespace (if one does not already exist) and displays an associated API key. This key has the maximum expiration time, which is 2 years. The resulting Namespace-scoped Service Account is named `<namespace>-service-account` and has an `Admin` Namespace permission by default.

#### Using tcld

To create a Namespace-scoped Service Account with tcld, use the `tcld service-account create-scoped` command:

```
tcld service-account create-scoped -n "test-scoped-sa" --np "test-ns=Admin"
```

This example creates a Namespace-scoped Service Account for the Namespace `test-ns`, named `test-scoped-sa`, with `Admin` Namespace Permission. Note that the Account Role is omitted, since Namespace-scoped Service Accounts always have a `Read` Account Role.

### Lifecycle

When a Namespace is deleted, all associated Namespace-scoped Service Accounts and their API keys are automatically deleted as well, so you do not need to remove them manually after deleting a Namespace.

---

## Manage user groups

## What are user groups?

User groups help you manage sets of users that should have the same access. Instead of separately assigning the same role to individual users, you can create a user group, assign it the desired roles, and then add users to the group. This eases the toil of managing individual user permissions and simplifies access management. When a new role is needed, it can be added to the group once, and every member's access reflects the new role.

User groups can be assigned both [account-level roles](/cloud/users#account-level-roles) and [namespace-level permissions](/cloud/users#namespace-level-permissions). One user can be assigned to many groups. If a user's group memberships grant multiple roles for the same resource, the user's effective role is the most permissive of them. For example, if `Group A` grants a read-only role to a namespace but `Group B` grants a write role to that namespace, then a user who belongs to both `Group A` and `Group B` has the write role for the namespace.

[Service accounts](/cloud/service-accounts) cannot be assigned to user groups. Only users with the Account Owner or Global Admin account-level [role](/cloud/users#account-level-roles) can manage user groups.

## How SCIM groups work with user groups {#scim-groups}

[SCIM groups](/cloud/scim) work similarly to user groups with respect to role assignment. Unlike a user group, the lifecycle of a SCIM group is fully managed by the SCIM integration, which means:
1. SCIM groups cannot be created except through the SCIM integration
1. SCIM groups cannot be deleted except through the SCIM integration
1. SCIM group membership is managed through the SCIM integration

User groups and SCIM groups can be used simultaneously in a single Temporal Cloud account. One user may belong to multiple SCIM groups and to multiple user groups. Using user groups and SCIM groups together can be useful when the groups defined in the identity provider (IdP) don't map cleanly to the access you need to grant in Temporal Cloud. Instead of having to update the IdP (which is often sensitive and time-consuming), you can use Temporal Cloud user groups to manage access.

:::info
All user group administration requires an Account Owner or Global Admin account-level [role](/cloud/users#account-level-roles).
:::

## How to create a user group in your Temporal Cloud account {#create-group}

User group names must be 3-64 characters long and can only contain lowercase letters, numbers, hyphens, and underscores.

1. Navigate to the [identities page](https://cloud.temporal.io/settings/identities)
1. Click the Create Group button
1. Name the group
1. Assign an account-level role to the group (you can assign namespace-level permissions after the group is created)
1. Click Save

See the [`tcld` user-group create](/cloud/tcld/user-group/#create) command reference for details. See the [Terraform provider documentation](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/group) for details.

## How to assign roles to a user group {#assign-group-roles}

To edit the account role of a group:

1. Navigate to the [identities page](https://cloud.temporal.io/settings/identities)
1. Find the group to edit (you can filter the list of identities to show only groups by clicking the Groups tab on the table)
1. Click Edit Group
1. Click the Account Role dropdown
1. Select a new account role
1. Click Save

To add namespace permissions to a group:

1. Navigate to the [identities page](https://cloud.temporal.io/settings/identities)
1. Find the group to edit (you can filter the list of identities to show only groups by clicking the Groups tab on the table)
1. Click Edit Group
1. Click Add Namespaces
1. Under Grant Access to a Namespace, search for the namespace you'd like to add permissions for
1. Select the namespace
1. Click the pencil to edit the permissions for the selected namespace
1. Click Save

To edit or remove namespace permissions from a group:

1. Click Edit Group
1. Click the pencil on a permission to edit it, or the trash can to delete it
1. Click Save

See the [`tcld` user-group set-access](/cloud/tcld/user-group/#set-access) command reference for details. See the [Terraform provider documentation](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/group) for details.

## How to manage users in a group {#assign-group-members}

To add users to the group:

1. Navigate to the [identities page](https://cloud.temporal.io/settings/identities)
1. Find the group to edit (you can filter the list of identities to show only groups by clicking the Groups tab on the table)
1. Click Edit Group
1. Under Members, search for the user you'd like to add
1. Select the user
1. Click Save

To remove a user from the group:

1. Click Edit Group
1. Under Members, click the X next to the user you'd like to remove
1. Click Save

See the [`tcld` user-group add-users](/cloud/tcld/user-group/#add-users) and [`tcld` user-group remove-users](/cloud/tcld/user-group/#remove-users) command references for details. See the [Terraform provider documentation](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/group) for details.

## Delete a user group

1. Navigate to the [identities page](https://cloud.temporal.io/settings/identities)
1. Find the group to delete (you can filter the list of identities to show only groups by clicking the Groups tab on the table)
1. Click the dropdown next to the edit button
1. Click Delete
1. Confirm by clicking Delete

See the [`tcld` user-group delete](/cloud/tcld/user-group/#delete) command reference for details. See the [Terraform provider documentation](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/group) for details.

---

## User management

:::caution
Access to Temporal Cloud can be authorized through email and password, Google single sign-on, Microsoft single sign-on, or SAML, depending on your setup. If you are using Google OAuth for single sign-on and an email address is not associated with a Google Account, the user must follow the instructions in the [Use an existing email address](https://support.google.com/accounts/answer/27441?hl=en#existingemail) section of [Create a Google Account](https://support.google.com/accounts/answer/27441).

**Important:** Do _not_ create a Gmail account when creating a Google Account.

If your organization uses Google Workspace or Microsoft Entra ID, and your IT administrator has enabled controls over single sign-on permissions, then you will need to work with your IT administrator to allow logins to Temporal Cloud.
:::

When a user is created in Temporal Cloud, they receive an invitation email with a link. They must use this link to finalize their setup and access Temporal Cloud. Accounts with SAML configurations can ignore this email. However, those using Google/Microsoft SSO or email and password authentication need to accept the invitation link for their initial login to Temporal Cloud. For future logins, they must use the same authentication method they originally signed up with.

:::info
To invite users, a user must have the Global Admin or Account Owner account-level [role](/cloud/users#account-level-roles).
:::

### Roles and permissions

Each user in Temporal Cloud is assigned a role. Each user can also be assigned permissions for individual Namespaces.

- [Account-level roles](/cloud/users#account-level-roles)
- [Namespace-level permissions](/cloud/users#namespace-level-permissions)

To invite users using the Temporal Cloud UI:

1. In the Temporal Web UI, select **Settings** in the left portion of the window.
1. On the **Settings** page, select **Create Users** in the upper-right portion of the window.
1. On the **Create Users** page in the **Email Addresses** box, type or paste one or more email addresses.
1. In **Account-Level Role**, select a [Role](/cloud/users#account-level-roles). The Role applies to all users whose email addresses appear in **Email Addresses**.
1. If the account has any Namespaces, they are listed under **Grant access to Namespaces**. To add a permission, select the checkbox next to a Namespace, and then select a [permission](/cloud/users#namespace-level-permissions). Repeat as needed.
1. When all permissions are assigned, select **Send Invite**.

Temporal sends an email message to each user.
To join Temporal Cloud, a user must select **Accept Invite** in the message.

To invite users using tcld, see the [tcld user invite](/cloud/tcld/user/#invite) command. Temporal sends an email message to the specified user. To join Temporal Cloud, the user must select **Accept Invite** in the message.

You can also invite users programmatically using the Cloud Ops API:

1. Create a connection to your Temporal Service using the Cloud Operations API.
2. Use the [CreateUser service](https://github.com/temporalio/api-cloud/blob/main/temporal/api/cloud/cloudservice/v1/service.proto) to create a user.

### Frequently Asked Questions

#### Can the same email be used across different Temporal Cloud accounts?

No. Each email address can only be associated with a single Temporal Cloud account. If you need access to multiple accounts, you'll need a separate invite for each one, using a different email address.

#### Can I use Google or Microsoft SSO after signing up with email and password?

If you originally signed up for Temporal Cloud using an email and password, you won't be able to log in using Google or Microsoft single sign-on. If you prefer SSO, ask your Account Owner to delete your current user and send you a new invitation. During re-invitation, be sure to sign up using your preferred authentication method.

#### How do I complete the `Secure Your Account` step?

If you signed up for Temporal Cloud using an email and password, you're required to set up multi-factor authentication (MFA) for added security. Currently, only authenticator apps are supported as an additional factor (such as Google Authenticator, Microsoft Authenticator, and Authy). To proceed:

1. Download a supported authenticator app on your mobile device.
2. Scan the QR code shown on the **Secure Your Account** screen.
3. Enter the verification code from your app to complete MFA setup.
4. Securely store your recovery code. This code allows you to access your account if you lose access to your authenticator app.

Once MFA is configured, you'll be able to continue using Temporal Cloud.

#### What if I lose access to my authenticator app?

If you lose access to your authenticator app, you can still log in by clicking **Try another method** on the MFA screen. From there, you can either:

- Enter your recovery code (provided when you first set up MFA)
- Receive a verification code through email

Once you're logged in, you can reset your authenticator app by navigating to **My Profile** > **Password and Authentication** and then clicking **Authenticator App** > **Remove method**.

#### How do I reset my password?

If you're currently logged in and would like to change your password, click your profile icon at the top right of the Temporal Cloud UI, navigate to **My Profile** > **Password and Authentication**, and then click **Reset Password**. If you're not currently logged in, navigate to the login page of the Temporal Cloud UI, enter your email address, click **Continue**, and then select **Forgot password**. In both cases, you will receive an email with instructions on how to reset your password.
---

## Manage users

- [How to invite users to your Temporal Cloud account](#invite-users)
- [What are the account-level roles?](#account-level-roles)
- [What are the Namespace-level permissions?](#namespace-level-permissions)
- [How to update an account-level Role in Temporal Cloud](#update-roles)
- [How to update Namespace-level permissions in Temporal Cloud](#update-permissions)
- [How to delete a user from your Temporal Cloud account](#delete-users)
- [How to troubleshoot account access issues](#troubleshoot-access)

## How to invite users to your Temporal Cloud account {#invite-users}

For step-by-step instructions using the Web UI, tcld, or the Cloud Ops API, see the invitation steps in the User management section above.

## What are the account-level roles for users in Temporal Cloud? {#account-level-roles}

When an Account Owner or Global Admin invites a user to join an account, they select one of the following roles for that user:

- **Global Admin**
  - Has full administrative permissions across the account, including users and usage
  - Can create and manage [Namespaces](/namespaces) and [Nexus Endpoints](/nexus/endpoints)
  - Has Namespace Admin [permissions](#namespace-level-permissions) on all Namespaces in the account. This permission cannot be revoked
- **Developer**
  - Can create Namespaces
  - Is granted [Namespace Admin](/cloud/users#namespace-level-permissions) permission for each Namespace they create. This permission can be revoked
  - Can create and manage Nexus Endpoints where they are a [Namespace Admin](/cloud/users#namespace-level-permissions) on the Endpoint's target Namespace
- **Read-Only**
  - Can read information
  - Can be granted Namespace [permissions](#namespace-level-permissions), for example to read or write Workflow state in a given Namespace
  - Can view all Nexus Endpoints in the account, which have separate [runtime access controls](/nexus/security#runtime-access-controls)

In addition, there are two roles that the Global Admin cannot assign:

- **Account Owner**
  - Has full administrative permissions across the account, including users, usage, and [billing](/cloud/billing-and-cost)
  - Can create and manage Namespaces and Nexus Endpoints
  - Has Namespace Admin [permissions](#namespace-level-permissions) on all [Namespaces](/namespaces) in the account. This permission cannot be revoked
- **Finance Admin**
  - Has permissions to view [billing](/cloud/billing-and-cost) information and update payment information
  - Otherwise, has the same permissions as Read-Only users
  - Can be assigned to Service Accounts by a Global Admin, but otherwise can only be assigned by an Account Owner

:::note Default Role

When the account is created, the initial user who logs in is automatically assigned the Account Owner role. If your account does not have an Account Owner, please reach out to [Support](https://temporalsupport.zendesk.com/) to assign the appropriate individual to this role.

:::

## Using the Account Owner role

The Account Owner role (i.e., users with the Account Owner system role) holds the highest level of access in the system. This role configures account-level parameters and manages Temporal billing and payment information. It allows users to perform all actions within the Temporal Cloud account.

:::tip Best Practices

Temporal strongly recommends the following precautions when assigning the Account Owner role to users:

- Assign the role to at least two users in your organization. Otherwise, limit the number of users with this role.
- Associate a person's direct email address with the Account Owner role, rather than a shared or generic address, so Temporal Support can contact the right person in urgent situations.
This latter rule is useful for anyone on your team who may need to be contacted urgently, regardless of their Account role.

:::

## What are the Namespace-level permissions for users in Temporal Cloud? {#namespace-level-permissions}

An Account Owner or Global Admin can assign permissions for any [Namespace](/namespaces) in an account. A Developer can assign permissions for a Namespace they create.

For a Namespace, a user can have one of the following permissions:

- **Namespace Admin:**
  - Can [manage the Namespace](/cloud/namespaces#manage-namespaces), including identities and permissions
  - Can create, rename, update, and delete [Workflows](/workflows) within the Namespace
- **Write:**
  - Can create, rename, update, and delete [Workflows](/workflows) within the Namespace
- **Read-Only:**
  - Can only read information from the Namespace

## How to update an account-level role in Temporal Cloud {#update-roles}

With Global Admin or Account Owner privileges, you can update any user's account-level [role](#account-level-roles) using either the Web UI or the tcld CLI utility.

The Account Owner role can only be granted by existing Account Owners. For security reasons, changes to the Account Owner role must be made through Temporal Support. To change or delete an Account Owner, you must submit a [support ticket](https://temporalsupport.zendesk.com/).

{/* How to update an account-level role in Temporal Cloud using Web UI */}

### How to update an account-level role using Web UI

1. In Temporal Web UI, select **Settings** in the left portion of the window.
1. On the **Settings** page, select the user.
1. On the user profile page, select **Edit User**.
1. On the **Edit User** page in **Account Level Role**, select the role.
1. Select **Save**.

{/* How to update an account-level role in Temporal Cloud using tcld */}

### How to update an account-level role using tcld

For details, see the [tcld user set-account-role](/cloud/tcld/user/#set-account-role) command.

## How to update Namespace-level permissions in Temporal Cloud {#update-permissions}

You can update Namespace-level [permissions](#namespace-level-permissions) by using either the Web UI or tcld.

{/* How to update Namespace-level permissions for a Namespace in Temporal Cloud using Web UI */}

### How to use the Web UI to update permissions for multiple users within a single Namespace

1. In Temporal Web UI, select **Namespaces** in the left portion of the window.
1. On the **Namespaces** page, select the Namespace.
1. If necessary, scroll down to the list of user permissions.
1. Change the permissions for one or more users.
1. Select **Save**.

{/* How to update Namespace-level permissions for a user in Temporal Cloud using Web UI */}

### How to use the Web UI to update a user's permissions across multiple Namespaces

:::note

A user with the Account Owner or Global Admin account-level [role](#account-level-roles) has Namespace Admin permissions for all Namespaces.

:::

1. In Temporal Web UI, select **Settings** in the left portion of the window.
1. On the **Settings** page in the **Users** tab, select the user.
1. On the user profile page, select **Edit User**.
1. On the **Edit User** page in **Namespace permissions**, change the permissions for one or more Namespaces.
1. Select **Save**.
{/* How to update Namespace-level permissions in Temporal Cloud using tcld */}

### How to use tcld to update Namespace-level permissions

For details, see the [tcld user set-namespace-permissions](/cloud/tcld/user/#set-namespace-permissions) command.

## How to delete a user from your Temporal Cloud account {#delete-users}

You can delete a user from your Temporal Cloud account by using either the Web UI or tcld.

:::info

To delete a user, a user must have the Account Owner or Global Admin account-level [role](#account-level-roles).

:::

{/* How to delete a user from your Temporal Cloud account using Web UI */}

### How to delete a user using Web UI

1. In Temporal Web UI, select **Settings** in the left portion of the window.
1. On the **Settings** page, find the user and, on the right end of the row, select **Delete**.
1. In the **Delete User** dialog, select **Delete**.

You can delete a user in two other ways in the Web UI:

- User profile page: Select the down arrow next to **Edit User** and then select **Delete**.
- **Edit User** page: Select **Delete User**.

{/* How to delete a user from your Temporal Cloud account using tcld */}

### How to delete a user using tcld

For details, see the [tcld user delete](/cloud/tcld/user/#delete) command.

## Account-level roles and Namespace-level permissions {#account-level-roles-and-namespace-level-permissions}

Temporal account-level roles and Namespace-level permissions provide access to specific Temporal Workflow and Temporal Cloud operational APIs. The following tables provide the API details associated with each account-level role and Namespace-level permission.

:::note

Account Owners and Global Admins have Namespace Admin permissions on all Namespaces.

:::

#### Account-level role details

This table provides API-level details for the permissions granted to a user through account-level roles. These permissions are configured per user.
| Permission                        | Read-only | Developer | Finance Admin | Global Admin | Account Owner |
| --------------------------------- | --------- | --------- | ------------- | ------------ | ------------- |
| CountIdentities                   | ✔         | ✔         | ✔             | ✔            | ✔             |
| CreateAccountAuditLogSink         |           |           |               | ✔            | ✔             |
| CreateAPIKey                      | ✔         | ✔         | ✔             | ✔            | ✔             |
| CreateNamespace                   |           | ✔         |               | ✔            | ✔             |
| CreateNexusEndpoint               |           | ✔         |               | ✔            | ✔             |
| CreateServiceAccount              |           |           |               | ✔            | ✔             |
| CreateServiceAccountAPIKey        |           |           |               | ✔            | ✔             |
| CreateStripeCustomerPortalSession |           |           | ✔             |              | ✔             |
| CreateUser                        |           |           |               | ✔            | ✔             |
| DeleteAccountAuditLogSink         |           |           |               | ✔            | ✔             |
| DeleteAPIKey                      | ✔         | ✔         | ✔             | ✔            | ✔             |
| DeleteNexusEndpoint               |           | ✔         |               | ✔            | ✔             |
| DeleteServiceAccount              |           |           |               | ✔            | ✔             |
| DeleteUser                        |           |           |               | ✔            | ✔             |
| GetAccount                        | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetAccountAuditLogSink            |           |           |               | ✔            | ✔             |
| GetAccountAuditLogSinks           |           |           |               | ✔            | ✔             |
| GetAccountFeatureFlags            | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetAccountLimits                  | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetAccountSettings                | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetAccountUsage                   |           |           |               | ✔            | ✔             |
| GetAPIKey                         | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetAPIKeys                        | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetAsyncOperation                 | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetAuditLogs                      |           |           |               | ✔            | ✔             |
| GetDecodedCertificate             | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetIdentities                     | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetIdentity                       | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetNamespaces                     | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetNamespacesUsage                |           |           |               | ✔            | ✔             |
| GetNexusEndpoint                  | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetNexusEndpoints                 | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetRegion                         | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetRegions                        | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetRequestStatus                  | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetRequestStatuses                |           |           |               | ✔            | ✔             |
| GetRequestStatusesForNamespace    | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetRequestStatusesForUser         | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetRoles                          | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetRolesByPermissions             | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetServiceAccount                 | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetServiceAccounts                | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetStripeInvoice                  |           |           | ✔             |              | ✔             |
| GetUser                           | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetUsers                          | ✔         | ✔         | ✔             | ✔            | ✔             |
| GetUsersWithAccountRoles          | ✔         | ✔         | ✔             | ✔            | ✔             |
| InviteUsers                       |           |           |               | ✔            | ✔             |
| ListCreditLedgerEntries           |           |           | ✔             |              | ✔             |
| ListGrants                        |           |           | ✔             |              | ✔             |
| ListMetronomeInvoices             |           |           | ✔             |              | ✔             |
| ListMetronomeInvoicesForNamespace |           |           | ✔             |              | ✔             |
| ListNamespaces                    | ✔         | ✔         | ✔             | ✔            | ✔             |
| ListPromotionGrantBalances        |           |           | ✔             |              | ✔             |
| ResendUserInvite                  |           |           |               | ✔            | ✔             |
| SetAccountSettings                |           |           |               | ✔            | ✔             |
| SyncCurrentUserInvite             | ✔         | ✔         | ✔             | ✔            | ✔             |
| UpdateAccount                     |           |           |               | ✔            | ✔             |
| UpdateAccountAuditLogSink         |           |           |               | ✔            | ✔             |
| UpdateAPIKey                      | ✔         | ✔         | ✔             | ✔            | ✔             |
| UpdateNexusEndpoint               |           | ✔         |               | ✔            | ✔             |
| UpdateServiceAccount              |           |           |               | ✔            | ✔             |
| UpdateUser                        |           |           |               | ✔            | ✔             |
| ValidateAccountAuditLogSink       |           |           |               | ✔            | ✔             |

#### Namespace-level permissions details

This table provides API-level details for the permissions granted to a user through Namespace-level permissions. These permissions are configured per Namespace per user.
| Permission                         | Read | Write | Namespace Admin |
| ---------------------------------- | ---- | ----- | --------------- |
| CountWorkflowExecutions            | ✔    | ✔     | ✔               |
| CreateExportSink                   |      | ✔     | ✔               |
| CreateSchedule                     |      | ✔     | ✔               |
| DeleteExportSink                   |      | ✔     | ✔               |
| DeleteNamespace                    |      | ✔     | ✔               |
| DeleteSchedule                     |      | ✔     | ✔               |
| DescribeBatchOperation             | ✔    | ✔     | ✔               |
| DescribeNamespace                  | ✔    | ✔     | ✔               |
| DescribeSchedule                   | ✔    | ✔     | ✔               |
| DescribeTaskQueue                  | ✔    | ✔     | ✔               |
| DescribeWorkflowExecution          | ✔    | ✔     | ✔               |
| FailoverNamespace                  |      |       | ✔               |
| GetExportSink                      | ✔    | ✔     | ✔               |
| GetExportSinks                     | ✔    | ✔     | ✔               |
| GetNamespace                       | ✔    | ✔     | ✔               |
| GetNamespaceUsage                  | ✔    | ✔     | ✔               |
| GetReplicationStatus               | ✔    | ✔     | ✔               |
| GetSearchAttributes                | ✔    | ✔     | ✔               |
| GetUsersForNamespace               | ✔    | ✔     | ✔               |
| GetWorkerBuildIdCompatibility      | ✔    | ✔     | ✔               |
| GetWorkerTaskReachability          | ✔    | ✔     | ✔               |
| GetWorkflowExecutionHistory        | ✔    | ✔     | ✔               |
| GetWorkflowExecutionHistoryReverse | ✔    | ✔     | ✔               |
| GlobalizeNamespace                 |      |       | ✔               |
| ListBatchOperations                | ✔    | ✔     | ✔               |
| ListClosedWorkflowExecutions       | ✔    | ✔     | ✔               |
| ListExportSinks                    | ✔    | ✔     | ✔               |
| ListFailoverHistoryByNamespace     | ✔    | ✔     | ✔               |
| ListOpenWorkflowExecutions         | ✔    | ✔     | ✔               |
| ListReplicaStatus                  | ✔    | ✔     | ✔               |
| ListScheduleMatchingTimes          | ✔    | ✔     | ✔               |
| ListSchedules                      | ✔    | ✔     | ✔               |
| ListTaskQueuePartitions            | ✔    | ✔     | ✔               |
| ListWorkflowExecutions             | ✔    | ✔     | ✔               |
| PatchSchedule                      |      | ✔     | ✔               |
| PollActivityTaskQueue              |      | ✔     | ✔               |
| PollWorkflowTaskQueue              |      | ✔     | ✔               |
| QueryWorkflow                      | ✔    | ✔     | ✔               |
| RecordActivityTaskHeartbeat        |      | ✔     | ✔               |
| RecordActivityTaskHeartbeatById    |      | ✔     | ✔               |
| RenameCustomSearchAttribute        |      | ✔     | ✔               |
| RequestCancelWorkflowExecution     |      | ✔     | ✔               |
| ResetStickyTaskQueue               |      | ✔     | ✔               |
| ResetWorkflowExecution             |      | ✔     | ✔               |
| RespondActivityTaskCanceled        |      | ✔     | ✔               |
| RespondActivityTaskCanceledById    |      | ✔     | ✔               |
| RespondActivityTaskCompleted       |      | ✔     | ✔               |
| RespondActivityTaskCompletedById   |      | ✔     | ✔               |
| RespondActivityTaskFailed          |      | ✔     | ✔               |
| RespondActivityTaskFailedById      |      | ✔     | ✔               |
| RespondQueryTaskCompleted          |      | ✔     | ✔               |
| RespondWorkflowTaskCompleted       |      | ✔     | ✔               |
| RespondWorkflowTaskFailed          |      | ✔     | ✔               |
| SetUserNamespaceAccess             |      |       | ✔               |
| SignalWithStartWorkflowExecution   |      | ✔     | ✔               |
| SignalWorkflowExecution            |      | ✔     | ✔               |
| StartBatchOperation                |      | ✔     | ✔               |
| StartWorkflowExecution             |      | ✔     | ✔               |
| StopBatchOperation                 |      | ✔     | ✔               |
| TerminateWorkflowExecution         |      | ✔     | ✔               |
| UpdateExportSink                   |      | ✔     | ✔               |
| UpdateNamespace                    |      | ✔     | ✔               |
| UpdateSchedule                     |      | ✔     | ✔               |
| UpdateUserNamespacePermissions     |      |       | ✔               |
| ValidateExportSink                 |      | ✔     | ✔               |
| ValidateGlobalizeNamespace         |      |       | ✔               |

Account Owners and Global Admins have Namespace Admin permissions on all Namespaces.

## How to troubleshoot account access issues {#troubleshoot-access}

### Why can't I sign in after my email domain changed? {#email-domain-change}

If your organization changed its email domain (for example, from `@oldcompany.com` to `@newcompany.com`), you may be unable to sign in to Temporal Cloud with your existing account.

**Why this happens:** When you sign in using "Continue with Google" or "Continue with Microsoft", Temporal Cloud identifies your account by your email address. If your email address changes, Temporal Cloud sees this as a different identity and cannot match it to your existing account.
**How to resolve this:** [Create a support ticket](/cloud/support#support-ticket) with the following information:

- Your previous email address (the one originally used to access Temporal Cloud)
- Your new email address
- Your Temporal Cloud Account Id (if known)

Temporal Support can update your account to use your new email address.

:::tip Use SAML for enterprise identity management

If your organization frequently changes email domains or wants centralized control over user authentication, consider using [SAML authentication](/cloud/saml). With SAML, your identity provider (IdP) manages user identities, and email domain changes can be handled within your IdP without affecting Temporal Cloud access.

:::

---

## Enable High Availability

:::tip Support, stability, and dependency info

Same-region Replication and Multi-cloud Replication are in [Public Preview](/evaluate/development-production-features/release-stages#public-preview). Multi-region Replication is in [General Availability](/evaluate/development-production-features/release-stages#general-availability).

:::

You can enable High Availability features ([Same-region Replication](/cloud/high-availability#same-region-replication), [Multi-region Replication](/cloud/high-availability#multi-region-replication), or [Multi-cloud Replication](/cloud/high-availability#multi-cloud-replication)) for a new or existing Namespace by adding a replica. When you add a replica, Temporal Cloud begins asynchronously replicating ongoing and existing Workflow Executions.

Not all replication options are available in all regions. See the [region documentation](/cloud/regions) for the replication options available in each region. Using private network connectivity with an HA Namespace requires extra setup. See [Connectivity for HA](/cloud/high-availability/ha-connectivity).

There are charges associated with Replication and enabling High Availability features. For pricing details, visit Temporal Cloud's [Pricing](/cloud/pricing) page.

## Create a Namespace with High Availability features {#create}

To create a new Namespace with High Availability features, you can use the Temporal Cloud UI or the tcld command-line utility.

1. Visit Temporal Cloud in your Web browser.
1. During Namespace creation, specify the primary [region](/cloud/regions) for the Namespace.
1. Select "Add a replica".
1. Choose the [region](/cloud/regions) for the replica.

The web interface will present an estimated time for replication to complete. This time is based on your selection and the size and scale of the Workflows in your Namespace.

At the command line, enter:

```
tcld namespace create \
  --namespace <namespace>.<account-id> \
  --region <region> \
  --region <region>
```

Specify the [region codes](/cloud/regions) as arguments to the two `--region` flags.

- Using the same region replicates to an isolation domain within that region.
- Using a different region replicates across regions.

If using API key authentication with the `--api-key` flag, you must add it directly after the tcld command and before `namespace create`.

Temporal Cloud sends an email alert to all Namespace Admins once your Namespace replica is ready for use.

## Add High Availability to an existing Namespace {#upgrade}

A replica can be added after a Namespace has already been created.

1. Visit Temporal Cloud Namespaces in your Web browser.
1. Navigate to the Namespace details page.
1. Select the "Add a replica" button.
1. Choose the [region](/cloud/regions) for the replica.

The web interface will present an estimated time for replication to complete.
This time is based on your selection and the size and scale of the Workflows in your Namespace. Temporal Cloud sends an email alert to all Namespace Admins once your Namespace replica is ready for use.

At the command line, enter:

```
tcld namespace add-region \
  --namespace <namespace>.<account-id> \
  --region <region>
```

Specify the [region code](/cloud/regions) of the region where you want to create the replica as an argument to the `--region` flag.

If using API key authentication with the `--api-key` flag, you must add it directly after the tcld command and before `namespace add-region`.

Temporal Cloud sends an email alert once your Namespace is ready for use.

## Change a replica location {#changing}

Temporal Cloud doesn't support changing a replica's location directly. To change a replica's location, you need to remove the replica and add a new one.

:::caution

We discourage changing the location of your replica for deployed applications, except under exceptional circumstances. If you remove your replica, you lose the availability guarantees of the Namespace, and it can take time to add another replica.

If you remove a replica from a region, you must wait seven days before you can re-enable High Availability (HA) in that same location. During this period, you may add a replica to a different region, provided you have not had one active there within the last seven days.

:::

Follow these steps to change the replica location:

1. [Remove your replica](#disable). This disables High Availability for your Namespace.
2. [Add a new replica](#upgrade) to your Namespace.

You will receive an email alert once your Namespace is ready for use.

## Disable High Availability (remove a replica) {#disable}

To disable High Availability features on a Namespace, remove the replica from that Namespace. Removing a replica disables all High Availability features:

- Discontinues replication of the Workflows in the Namespace.
- Disables the Namespace's ability to trigger a failover to a different region or cloud.
- For Workers and Clients that use API keys, removing a replica requires connecting to the Namespace using the published [regional endpoint](/cloud/regions) for the Namespace's region; connections using the Namespace's endpoint or the replica region's regional endpoint will no longer work.
- Ends High Availability charges.

:::caution

After removing a Namespace's replica, you cannot re-enable replication on that same Temporal Cloud Namespace in the same location for seven days.

:::

Follow these steps to remove a replica from a Namespace:

1. If you are using API keys for authentication on this Namespace, configure your Workers and Clients that use API keys to [connect with the regional Temporal Cloud endpoint](/cloud/api-keys#namespace-authentication) for the Namespace's primary region.
1. Navigate to the Namespace details page in Temporal Cloud.
1. Select the option to "Remove Replica" on the "Region" card.

First, if you are using API keys for authentication on this Namespace, configure your Workers and Clients that use API keys to [connect with the regional Temporal Cloud endpoint](/cloud/api-keys#namespace-authentication) for the Namespace's primary region. Then, run the following command to remove the replica:

```
tcld namespace delete-region \
  --api-key <api-key> \
  --namespace <namespace>.<account-id> \
  --region <replica-region>
```

:::important

To remove a replica from a Namespace with API keys enabled, you need assistance from Temporal Support. Please [contact support](/cloud/support#support-ticket) with the Namespace ID of the Namespace where you want to remove the replica.
You must confirm that Workers and Clients with API keys have been configured to connect to the Namespace using the published [regional endpoint](/cloud/regions). This safeguard ensures that Workers and Clients continue running uninterrupted once Temporal Support removes the replica. After the replica is removed, if Workers and Clients with API keys attempt to use the Namespace endpoint or the former replica's regional endpoint, their requests will fail.

:::

---

## Configure and Trigger Failovers

In case of an incident or an outage, Temporal will automatically fail over your Namespace from the primary to the replica. This lets Workflow Executions continue with minimal interruptions or data loss. You can also [manually initiate failovers](/cloud/high-availability/failovers) based on your situational monitoring or for testing.

Returning control from the replica to the primary is called a failback. The replica is active for a brief duration during an incident. After the incident, Temporal fails back to the primary.

## Failovers

Occasionally, a Namespace may become temporarily unavailable due to an unexpected incident. Temporal Cloud detects these issues using regular health checks.

### Health checks

Temporal Cloud monitors error rates, latencies, and infrastructure problems, such as request timeouts. If it finds unhealthy conditions where indicators exceed the allowed thresholds, Temporal automatically switches the primary to the replica. In most cases, the replica is unaffected by the issue. This process is known as failover.

### Automatic failovers

Failovers prevent data loss and application interruptions. Existing Workflows continue, and new Workflows start as the incident is addressed. Once the incident is resolved, Temporal Cloud performs a "failback," shifting Workflow Execution processing back to the original Namespace.

Temporal Cloud handles failovers automatically, ensuring continuity without manual intervention. For more control over the failover process, you can [disable automated failovers](/cloud/high-availability/failovers#disabling-temporal-initiated).

:::tip

You can test the failover of a Namespace with High Availability features by manually triggering a failover using the UI or the `tcld` CLI utility. In most scenarios, we recommend you let Temporal handle failovers for you.

After failover, be aware of the following points:

- When working with Multi-region Namespaces, your CNAME may change. For example, it may switch from aws-us-west-1.region.tmprl.cloud to aws-us-east-1.region.tmprl.cloud. This change doesn't affect same-region Namespaces.
- Your Namespace endpoint _will not change_. If it is `my_namespace.my_account.tmprl.cloud:7233` before failover, it will be `my_namespace.my_account.tmprl.cloud:7233` after failover.

:::

### The failover process {#failover-process}

Temporal's automated failover process works as follows:

- During normal operation, the primary asynchronously copies operations and metadata to its replica, keeping them in sync.
- If the primary becomes unavailable, Temporal detects the issue through health checks. It automatically switches to the replica, using one of its available [failover scenarios](#scenarios).
- The replica takes over the active role and becomes the primary. Operations continue with minimal disruption.
- When the original primary recovers, the roles can either switch back (failback, by default) or remain as they are, based on your Namespace settings.

Automatic role switching with failover and failback minimizes downtime for consistent availability.
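Because failover is surfaced to clients through DNS, you can observe which region a Namespace currently resolves to from any host with network access. A minimal sketch using `dig`, reusing the illustrative endpoint names from the tip above:

```shell
# Show the CNAME chain for a Namespace endpoint (hostnames are illustrative)
dig +short my_namespace.my_account.tmprl.cloud
# Typical output: the regional CNAME target, then its A records, e.g.
#   aws-us-west-1.region.tmprl.cloud.
#   52.0.0.10
```

Running the same command after a failover should show the CNAME pointing at the newly active region, consistent with the DNS TTL behavior described later in this document.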
:::info

A Namespace failover, which updates the "active region" field in the Namespace record, is a metadata update. This update is replicated through the Namespace metadata mechanism.

:::

## Failover scenarios {#scenarios}

The Temporal Cloud failover mechanism supports several modes for executing Namespace failovers. These modes include graceful failover ("handover"), forced failover, and a hybrid mode. The hybrid mode is Temporal Cloud's default Namespace behavior.

### Graceful failover (handover) {#graceful-failover}

In this mode, Temporal Cloud fully processes and drains replication Tasks. Temporal Cloud pauses traffic to the Namespace before the failover. Graceful failover prevents the loss of progress and avoids data conflicts. During graceful failover, the Namespace may experience a brief period of unavailability. This duration can be limited by the caller and defaults to 10 seconds. If the system is unable to reach a consistent state within this timeout, the failover attempt is aborted and the Namespace reverts to its previous state.

During this unavailable period:

- Existing Workflows stop progress.
- Temporal Cloud returns a "Service unavailable" error. This error is retryable by the Temporal SDKs.
- State transitions will not happen and Tasks are not dispatched.
- User requests like Start/Signal Workflow are rejected.
- Operations are paused during handover.

This mode favors _consistency_ over availability.

### Forced failover {#forced-failover}

In this mode, a Namespace immediately activates in the replica. Events not replicated due to replication lag undergo conflict resolution upon reaching the new active Namespace. This mode prioritizes _availability_ over consistency.

### Hybrid failover mode {#hybrid-failover}

While graceful failovers are preferred for consistency, they aren't always practical. Temporal Cloud's hybrid failover mode (the default mode) limits the initial graceful failover attempt to 10 seconds or less. During this period:

- Existing Workflows stop progress.
- Temporal Cloud returns a "Service unavailable" error, which is retried by SDKs.

If the graceful approach doesn't resolve the issue, Temporal Cloud automatically switches to a forced failover. This strategy balances _consistency_ and _availability_ requirements.

### Scenario summary

| Failover Scenario            | Characteristics                                          |
| ---------------------------- | -------------------------------------------------------- |
| Graceful failover (handover) | Favors _consistency_ over availability.                   |
| Forced failover              | Prioritizes _availability_ over consistency.              |
| Hybrid failover mode         | Balances _consistency_ and _availability_ requirements.   |

## Network partitions

At any time, only the primary or the replica is active. The only exception occurs in the event of a [network partition](https://en.wikipedia.org/wiki/Network_partition), when a network splits into separate subnetworks. Should this occur, you can promote a replica to active status. **Caution:** This temporarily makes both regions active. After the network partition is resolved and communication between the isolation domains/regions is restored, a conflict resolution algorithm determines whether the primary or replica remains active.

:::tip

In traditional active/active replication, multiple nodes serve requests and accept writes simultaneously, ensuring strong synchronous data consistency. In contrast, with a Temporal Cloud Namespace with High Availability features, only the primary accepts requests and writes at any given time.
Workflow History Events are written to the primary first and then asynchronously replicated to the replica, ensuring that the replica remains in sync.

:::

## Conflict resolution {#conflict-resolution}

Namespaces with replicas rely on asynchronous event replication. Updates made to the primary may not immediately be reflected in the replica due to replication lag, particularly during failovers. In the event of a non-graceful failover, replication lag may cause a temporary setback in Workflow progress.

Namespaces that aren't replicated can be configured to provide _at-most-once_ semantics for Activity execution when a retry policy's [maximum attempts](https://docs.temporal.io/retry-policies#maximum-attempts) is set to 1. High Availability Namespaces provide _at-least-once_ semantics for execution of Activities. Completed Activities _may_ be re-dispatched in a newly active Namespace, leading to repeated executions.

When a Workflow Execution is updated in a newly active replica following a failover, events from the previously active Namespace that arrive after the failover can't be directly applied. At this point, Temporal Cloud has forked the Workflow History. After failover, Temporal Cloud creates a new branch history for execution, and begins its conflict-resolution process. The Temporal Service ensures that Workflow Histories remain valid and are replayable by SDKs post-failover or after conflict resolution. This capability is crucial for ensuring Workflow Executions continue forward without losing progress, and for maintaining consistency across replicas, even during incidents that cause disruptions in replication.

## Perform a manual failover {#triggering-failovers}

For some users, Temporal's automated health checks and failovers don't provide sufficient nuance and control. For this reason, you can manually trigger failovers based on your own custom alerts and for testing purposes. This section explains how, and what to expect afterward.

:::warning Check Your Replication Lag

Always check the replication lag before initiating a failover. A forced failover when there is a significant replication lag has a higher likelihood of rolling back Workflow progress.

:::

### Trigger the failover {#manual-failovers}

You can trigger a failover manually using the Temporal Cloud Web UI or the tcld CLI, depending on your preference and setup. The following instructions outline the steps for each method:

1. Visit the [Namespace page](https://cloud.temporal.io/namespaces) on the Temporal Cloud Web UI.
1. Navigate to your Namespace details page and select the **Trigger a failover** option from the menu.
1. Confirm your action.

After confirmation, Temporal initiates the failover.

To manually trigger a failover, run the following command in your terminal:

```
tcld namespace failover \
  --namespace <namespace>.<account-id> \
  --region <target-region>
```

If using API key authentication with the `--api-key` flag, you must add it directly after the tcld command and before `namespace failover`.

Temporal fails over the primary to the replica. When you're ready to fail back, follow these failover instructions to move the primary back to the original.

### Post-failover event information {#info}

After any failover, whether triggered by you or by Temporal, an event appears in both the [Temporal Cloud Web UI](https://cloud.temporal.io/namespaces) (on the Namespace detail page) and in your audit logs. The audit log entry for a failover uses the `"operation": "FailoverNamespace"` event. After failover, the replica becomes active, taking over in the isolation domain or region.
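If you export audit logs for retention or SIEM ingestion, failover events are easy to isolate in the exported JSON. A minimal sketch with `jq`, assuming a hypothetical `audit.jsonl` file containing one JSON log entry per line:

```shell
# Print only Namespace failover events from exported audit logs (file name is illustrative)
jq -c 'select(.operation == "FailoverNamespace")' audit.jsonl
```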
You don't need to monitor Temporal Cloud's failover response in real time. Whenever there is a failover event, Temporal Cloud [notifies you via email](/cloud/notifications#admin-notifications).

### Returning to the primary with failbacks

After Temporal-initiated failovers, once the incident is resolved, Temporal Cloud shifts Workflow Execution processing back to the original region or isolation domain that was active before the incident. This is called a "failback".

:::note

To fail back after a manually initiated failover, follow the [Manual Failover](#manual-failovers) directions to fail over back to the original primary.

:::

## Disabling Temporal-initiated failovers {#disabling-temporal-initiated}

When you add a replica to a Namespace, Temporal Cloud automatically fails over the Namespace to its replica in the event of an incident or an outage. _This is the recommended and default option._ However, if you prefer to disable Temporal-initiated failovers and handle your own failovers, you can do so by following these instructions:

1. Navigate to the Namespace detail page in Temporal Cloud.
1. Choose the "Disable Temporal-initiated failovers" option.

To disable Temporal-initiated failovers, run the following command in your terminal:

```
tcld namespace update-high-availability \
  --namespace <namespace>.<account-id> \
  --disable-auto-failover=true
```

If using API key authentication with the `--api-key` flag, you must add it directly after the tcld command and before `namespace update-high-availability`.

Temporal Cloud disables its health-check-initiated failovers. To restore the default behavior, unselect the option in the Web UI or change `true` to `false` in the CLI command.

## Best practices: Workers and failovers {#worker}

Enabling High Availability for Namespaces doesn't require specific Worker configuration. The process is invisible to the Workers. When a Namespace fails over to the replica, the DNS redirection orchestrated by Temporal ensures that your existing Workers continue to poll the Namespace without interruption.

When a Namespace fails over to a replica in a different region, Workers will be communicating cross-region.

- If your application can't tolerate this latency, deploy a second set of Workers in the replica's region or opt for a replica in the same region.
- In the case of a complete regional outage, Workers in the original region may fail alongside the original Namespace. To keep Workflows moving during this level of outage, deploy a second set of Workers to the secondary region.

:::tip

Temporal Cloud enforces a maximum connection lifetime of 5 minutes. This offers your Workers an opportunity to re-resolve the DNS.

:::

## Best practices: scheduled failover testing {#testing}

Microservices and external dependencies will fail at some point. Testing failovers ensures your app can handle these failures effectively. Temporal recommends regular and periodic failover testing for mission-critical applications in production. By testing in non-emergency conditions, you verify that your app continues to function, even when parts of the infrastructure fail.

:::tip Safety First

If this is your first time performing a failover test, run it with a test-specific Namespace and application. This helps you gain operational experience before applying it to your production environment. Practice runs help ensure the process runs smoothly during real incidents in production.
:::

Failover testing can:

- **Validate replicated deployments**: In multi-region setups, failover testing ensures your app can run from another region when the primary region experiences outages. In standard setups, failover testing instead works with an isolation domain. This maintains high availability in mission-critical deployments. Manual testing confirms the failover mechanism works as expected, so your system handles incidents effectively.
- **Assess replication lag**: In multi-region deployments, monitoring [replication lag](/cloud/high-availability/monitoring#replication-lag-metric) between regions is crucial. Check the lag before initiating a failover to avoid rolling back Workflow progress. This is less important when using isolation domains, as failover is usually instantaneous. Manual testing helps you practice this critical step and understand its impact.
- **Assess recovery time**: Manual testing helps you measure actual recovery time. You can check whether it meets your expected Recovery Time Objective (RTO) of 20 minutes or less, as stated in the [High Availability Namespace SLA](/cloud/sla).
- **Identify potential issues**: Failover testing uncovers problems not visible during normal operation. This includes issues like [backlogs and capacity planning](https://temporal.io/blog/workers-in-production#testing-failure-paths-2438) and how external dependencies behave during a failover event.
- **Validate fault-oblivious programming**: Temporal uses a "fault-oblivious programming" model, where your app doesn't need to explicitly handle many types of failures. Testing failovers ensures that this model works as expected in your app.
- **Operational readiness**: Regular testing familiarizes your team with the failover process, improving their ability to handle real incidents when they arise.

Testing failovers regularly ensures your Temporal-based applications remain resilient and reliable, even when infrastructure fails.

---

## Connectivity for High Availability

:::tip Namespaces with High Availability features and AWS PrivateLink

Proper networking configuration is required for failover to be transparent to clients and workers when using PrivateLink. This page describes how to configure routing for Namespaces with High Availability features on AWS PrivateLink.

:::

To use AWS PrivateLink with High Availability features, you may need to:

- Override the regional DNS zone.
- Ensure network connectivity between the two regions.

These instructions assume you already have the PrivateLink connections in place. If not, follow our [guide for creating AWS PrivateLink connections and configuring private DNS](/cloud/connectivity/aws-connectivity).

## Customer side solutions

When using PrivateLink, you connect to Temporal Cloud through a VPC Endpoint, which uses addresses local to your network. Temporal serves its regional endpoints from a dedicated `region.tmprl.cloud` DNS zone. This setup allows you to override that zone, ensuring that traffic is routed internally for the regions you're using.

A Namespace's active region is reflected in the target of a CNAME record.
For example, if the active region of a Namespace is AWS us-west-2, the DNS configuration would look like this:

| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-west-2.region.tmprl.cloud |
| ----------------------------------- | ----- | -------------------------------- |

After a failover, the CNAME record will be updated to point to the failover region, for example:

| ha-namespace.account-id.tmprl.cloud | CNAME | aws-us-east-1.region.tmprl.cloud |
| ----------------------------------- | ----- | -------------------------------- |

The Temporal domain did not change, but the CNAME target updated from us-west-2 to us-east-1.

## Setting up the DNS override

:::caution

Private connectivity is not yet offered for GCP Multi-region Namespaces.

:::

To set up the DNS override, configure specific regions to target the internal VPC Endpoint IP addresses. For example, you might set aws-us-west-1.region.tmprl.cloud to target 192.168.1.2. In AWS, this can be done using a Route 53 private hosted zone for `region.tmprl.cloud`. Link that private zone to the VPCs you use for Workers.

When your Workers connect to the Namespace, they first resolve the `<namespace>.<account-id>.tmprl.cloud` record. This points to `<region>.region.tmprl.cloud`, which then resolves to your internal IP addresses.

Consider how you'll configure Workers for this setup. You can either have Workers run in both regions continuously or establish connectivity between regions using Transit Gateway or VPC Peering. This way, Workers can access the newly activated region once failover occurs.

## Available regions, PrivateLink endpoints, and DNS record overrides

:::caution

The `sa-east-1` region is not yet available for use with Multi-region Namespaces. Currently, it is the only region on the continent.

:::

When using a Namespace with High Availability features, the Namespace's DNS record `<namespace>.<account-id>.tmprl.cloud` points to a regional DNS record in the format `<region>.region.tmprl.cloud`. Here, `<region>` is the currently active region for your Namespace. During failover, Temporal Cloud changes the target of the Namespace DNS record from one region to another.

Namespace DNS records are configured with a 15-second TTL. Any DNS cache should re-resolve the record within this time. As a rule of thumb, receiving an updated DNS record takes about twice (2x) the TTL. Clients should converge to the newly targeted region within, at most, a 30-second delay.

---

## High Availability

Temporal Cloud's High Availability features use asynchronous replication across multiple isolation domains to provide enhanced resilience and a 99.99% [SLA](/cloud/sla). When you enable High Availability features, Temporal deploys your primary and its replica in separate isolation domains, giving you control over the location of both. This redundancy, combined with failover capability, enhances availability during outages.

## Built-in reliability

Even without High Availability features, Temporal Cloud provides robust reliability and a 99.9% contractual Service Level Agreement ([SLA](/cloud/sla)) guarantee against service errors. Each standard Temporal Namespace uses replication across three availability zones to ensure high availability. An availability zone is a part of the system where tasks or operations are handled and executed. This design helps manage workloads and ensure tasks are completed while improving resource use and reducing delays.
Replication makes sure that any changes to Workflow state or History are saved in all three zones _before_ the Temporal Service acknowledges a change back to the Client. As a result, your standard Temporal Namespace stays operational even if one of its three zones becomes unavailable. This provides the basis of our 99.9% service level.

## High Availability features {#high-availability-features}

:::tip Support, stability, and dependency info

Same-region Replication and Multi-cloud Replication are in [Public Preview](/evaluate/development-production-features/release-stages#public-preview). Multi-region Replication is in [General Availability](/evaluate/development-production-features/release-stages#general-availability).

:::

High Availability features provide three types of replication:

| **Deployment**               | **Description**                                            |
| ---------------------------- | ---------------------------------------------------------- |
| **Same-region Replication**  | Isolation domains are located within the same region.      |
| **Multi-region Replication** | Isolation domains are located in separate regions.         |
| **Multi-cloud Replication**  | Isolation domains are located in separate cloud providers. |

### Same-region Replication

Temporal replicates Namespaces across isolation domains within one region. This option is a good fit when your application is built for one region and you prefer to fail over within that region. This provides a reliable failover mechanism while maintaining deployment simplicity.

### Multi-region Replication

Temporal replicates Namespaces across regions, making sure Workflows and data are available even if a region fails. Asynchronous replication means changes aren't immediately reflected in other regions but will sync over time, ensuring data integrity. This setup allows failovers between replicas without needing immediate consistency across regions. Replication across different regions enhances resilience and reliability.

### Multi-cloud Replication

Temporal asynchronously replicates all Workflows (live and historical) and data to a Namespace in an entirely different cloud provider. If a provider outage, regional outage, service disruption, or network issue occurs, traffic automatically shifts to the replica. Replicated data is securely encrypted and transmitted across the public internet between cloud providers. Internet connectivity allows Workers in one cloud to fail over to a replica in a different cloud.

:::info

When you adopt Temporal's High Availability features, don't forget to consider the reliability of your own Workers, infrastructure, and dependencies. Issues like network outages, hardware failures, or misconfigurations in your own systems can affect your application performance. For the highest level of reliability, distribute your dependencies across regions, and use our Multi-region or Multi-cloud Replication features. Using physically separated regions improves the fault tolerance of your application.

:::

## Service levels and recovery objectives

Namespaces using High Availability have a 99.99% [uptime SLA](/cloud/sla) with a sub-1-minute [RPO](/cloud/rpo-rto) and a 20-minute [RTO](/cloud/rpo-rto). For detailed information:

- [Service Level Agreement (SLA)](/cloud/sla)
- [Recovery Point Objective (RPO) and Recovery Time Objective (RTO)](/cloud/rpo-rto)

## Failover

High Availability Namespaces can automatically or manually [fail over](/cloud/high-availability/failovers) to the replica if the primary is unavailable or unhealthy.
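Since a forced failover under heavy replication lag can roll back Workflow progress, many teams alert on lag before relying on failover at all. A hedged sketch of a Prometheus alerting rule built on the `temporal_cloud_v0_replication_lag` histogram metric covered in the monitoring section below; the threshold, durations, and names are illustrative rather than recommendations:

```yaml
# prometheus-rules.yml (illustrative): warn when P99 replication lag stays above 60s
groups:
  - name: temporal-cloud-ha
    rules:
      - alert: TemporalReplicationLagHigh
        # P99 replication lag per Namespace, computed from histogram buckets
        expr: |
          histogram_quantile(0.99,
            sum(rate(temporal_cloud_v0_replication_lag_bucket[5m])) by (temporal_namespace, le)
          ) > 60
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "P99 replication lag for {{ $labels.temporal_namespace }} has exceeded 60s for 10m"
```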
## Target workloads

High Availability Namespaces are a great solution for workloads where an outage would cause:

- Revenue loss
- Poor customer experience
- Problems stemming from policy/legal requirements that demand high availability

These are often major concerns for financial services, e-commerce, gaming, global SaaS platforms, bookings & reservations, delivery & shipping, and order management.

---

## Monitoring High Availability

Temporal Cloud offers several ways for you to track the health and performance of your [High Availability](/cloud/high-availability) Namespaces.

## Replication status

You can monitor your replica status with the Temporal Cloud UI. If the replica is unhealthy, Temporal Cloud disables the "Trigger a failover" option to prevent failing over to an unhealthy replica. An unhealthy replica might be due to:

- **Data synchronization issues:** The replica fails to remain in sync with the primary due to network or performance problems.
- **Replication lag:** The replica falls behind the primary, causing it to be out of sync.
- **Network issues:** Loss of communication between the replica and the primary causes problems.
- **Failed health checks:** If the replica fails health checks, it's marked as unhealthy.

These issues prevent the replica from being used during a failover, ensuring system stability and consistency.

## Replication lag metric

Temporal Cloud's High Availability features use asynchronous replication between the primary and the replica. Workflow updates in the primary, along with associated History Events, are transmitted to the replica. Replication lag refers to the transmission delay of Workflow updates and History Events from the primary to the replica.

:::tip

Temporal Cloud strives to maintain a P95 replication lag of less than 1 minute; that is, 95% of updates reach the replica faster than this limit.

:::

A forced failover, when there is significant replication lag, increases the likelihood of rolling back Workflow progress. Always check the replication lag metrics before initiating a failover.

Temporal Cloud emits three replication lag-specific [metrics](/cloud/metrics/reference#replication-lag). The following samples demonstrate how you can use these metrics to monitor and explore replication lag:

**P99 replication lag histogram**:

```
histogram_quantile(0.99, sum(rate(temporal_cloud_v0_replication_lag_bucket[$__rate_interval])) by (temporal_namespace, le))
```

**Average replication lag**:

```
sum(rate(temporal_cloud_v0_replication_lag_sum[$__rate_interval])) by (temporal_namespace)
/
sum(rate(temporal_cloud_v0_replication_lag_count[$__rate_interval])) by (temporal_namespace)
```

When a Namespace is using a replica, you may notice that the Action count in `temporal_cloud_v0_total_action_count` is 2x what it was before adding a replica. This happens because Actions are replicated; they occur on both the primary and the replica.

## Failover audit log

When Temporal triggers failovers, the [audit log](/cloud/audit-logs) will update with details. Look for `"operation": "FailoverNamespace"` in the logs.

---

## Temporal Cloud guide

Welcome to the Temporal Cloud guide. In this guide you will find information about Temporal Cloud, onboarding, features, and how to use them.

To create a Temporal Cloud account, sign up [here](https://temporal.io/get-cloud).
**[Get started with Temporal Cloud.](/cloud/get-started)**

## Become familiar with Temporal Cloud

- [Overview of Temporal Cloud](/cloud/overview)
- [Security model](/cloud/security)
- [Service availability](/cloud/service-availability) (availability, region support, throughput, latency, and limits)
- [Account, Namespace, and application level configurations](/cloud/limits)
- [Service Level Agreement (SLA)](/cloud/sla)
- [Pricing](/cloud/pricing)
- [Support](/cloud/support)

## Feature guides

- [Get started with Temporal Cloud](/cloud/get-started)
- [Manage certificates](/cloud/certificates)
- [Manage API keys](/cloud/api-keys)
- [Manage Namespaces](/cloud/namespaces)
- [Manage users](/cloud/users)
- [Manage user groups](/cloud/user-groups)
- [Manage billing](/cloud/billing-and-cost)
- [Manage Service Accounts](/cloud/service-accounts)
- [API key feature guide](/cloud/api-keys)
- [Metrics feature guide](/cloud/metrics)
- [Temporal Nexus](/cloud/nexus)
- [SAML authentication feature guide](/cloud/saml)
- [Cloud Ops API](/ops)
- [Audit logging feature guide](/cloud/audit-logs)
- [`tcld` (Temporal Cloud command-line interface) reference](/cloud/tcld)

---

## Account Access

Temporal Cloud offers several ways to manage access to your Temporal Cloud account.

- [**Users**](/cloud/users) - Manage individual user accounts and permissions
- [**User Groups**](/cloud/user-groups) - Organize users into groups for simplified access management
- [**Service Accounts**](/cloud/service-accounts) - Configure service accounts for automated access
- [**SAML**](/cloud/saml) - Configure SAML-based single sign-on integration
- [**SCIM**](/cloud/scim) - Use your IdP to manage Temporal Cloud users and access via SCIM integration

---

## Datadog metrics setup

Datadog, in partnership with Temporal Cloud, has created a native integration with Temporal Cloud metrics. This integration is available to all Datadog customers.

Exporting Cloud metrics to Datadog provides enhanced observability, allowing you to monitor, alert, and visualize key performance indicators of your applications and infrastructure. Temporal's integration with Datadog extends the monitoring capabilities of your Temporal Cloud deployment. Benefits of using this integration include:

- Out-of-the-box Temporal Cloud metrics dashboard in Datadog
- No need to run a service; Datadog connects directly to Temporal Cloud

For detailed instructions on how to use the integration, see [the documentation on Datadog's site](https://docs.datadoghq.com/integrations/temporal_cloud/).

---

## General observability setup with metrics

You will learn how to do the following:

- [Configure an endpoint using the UI](#configure-via-ui)
- [Configure an endpoint using tcld](#configure-via-cli-tcld)
- [Query for metrics with a PromQL endpoint](#query-promql)

## Configure using the UI {#configure-via-ui}

**How to configure a metrics endpoint using Temporal Cloud UI**

:::note

To view and manage third-party integration settings, your user account must have the Account Owner or Global Admin [role](/cloud/users#account-level-roles).

:::

To assign a certificate and generate your metrics endpoint, follow these steps:

1. Log in to Temporal Cloud UI with an Account Owner or Global Admin [role](/cloud/users#account-level-roles).
2. Go to **Settings** and select **Observability**.
3. Add your root CA certificate (.pem) and save it.
   Note that if an observability endpoint is already set up, you can append your root CA certificate here to use the generated observability endpoint in your observability tool.
4. To test your endpoint, run the following command on your host:

   ```
   curl -v --cert <client-cert.pem> --key <client-key.pem> "<endpoint>/api/v1/query?query=temporal_cloud_v0_state_transition_count"
   ```

   If you have Workflows running on a Namespace in your Temporal Cloud instance, you should see some data as a result of running this command.

After the page refreshes, the new metrics endpoint appears below **Endpoint**, in the form `https://<account-id>.tmprl.cloud/prometheus`. Use the endpoint to configure your observability tool. For example, if you use Grafana, see [Grafana data sources configuration](/cloud/metrics/prometheus-grafana#grafana-data-sources-configuration).

You can also query via the [Prometheus HTTP API](https://prometheus.io/docs/prometheus/latest/querying/api/) at URLs like:

```
https://<account-id>.tmprl.cloud/prometheus/api/v1/query?query=temporal_cloud_v0_state_transition_count
```

For example:

```
$ curl --cert client.pem --key client-key.pem "https://<account-id>.tmprl.cloud/prometheus/api/v1/query?query=temporal_cloud_v0_state_transition_count" | jq .
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "temporal_cloud_v0_state_transition_count",
          "__rollup__": "true",
          "operation": "WorkflowContext",
          "temporal_account": "your-account",
          "temporal_namespace": "your-namespace.your-account-id",
          "temporal_service_type": "history"
        },
        "value": [
          1672347471.2,
          "0"
        ]
      },
      ...
}
```

## Configure endpoint using tcld {#configure-via-cli-tcld}

**How to configure a metrics endpoint using the tcld CLI.**

To add a certificate to a metrics endpoint, use [`tcld account metrics accepted-client-ca add`](/cloud/tcld/account#add).

To enable a metrics endpoint, use [`tcld account metrics enable`](/cloud/tcld/account#enable).

To disable a metrics endpoint, use [`tcld account metrics disable`](/cloud/tcld/account#disable).

For more information, see the [tcld account metrics command](/cloud/tcld/account#metrics).

## Query for metrics with a PromQL endpoint {#query-promql}

Temporal Cloud emits metrics in a Prometheus-supported format. Prometheus is an open-source toolkit for alerting and monitoring. The Temporal Service exposes Cloud metrics with a [Prometheus HTTP API endpoint](https://prometheus.io/docs/prometheus/latest/querying/api/). Temporal Cloud metrics provide a compatible data source for visualizing, monitoring, and observability platforms like Grafana and Datadog.

You can use functions like [rate](https://prometheus.io/docs/prometheus/latest/querying/functions/#rate) or [increase](https://prometheus.io/docs/prometheus/latest/querying/functions/#increase) to calculate the rate of increase for a Temporal Cloud metric:

```
rate(temporal_cloud_v0_frontend_service_request_count[$__rate_interval])
```

Or you might use Prometheus to calculate average latencies or histogram quantiles:

```
# Average latency
rate(temporal_cloud_v0_service_latency_sum[$__rate_interval]) / rate(temporal_cloud_v0_service_latency_count[$__rate_interval])

# Approximate 99th percentile latency broken down by operation
histogram_quantile(0.99, sum(rate(temporal_cloud_v0_service_latency_bucket[$__rate_interval])) by (le, operation))
```

Metrics are scraped every 30 seconds and exposed to the metrics endpoint with a 1-minute lag. The endpoint returns data with a 15-second resolution, which can result in the same value appearing twice in a row.
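Because the endpoint behaves like a standard Prometheus query API secured with mTLS, Grafana can consume it as an ordinary Prometheus data source. A minimal provisioning sketch, assuming hypothetical certificate paths and account ID; see the Grafana data sources configuration guide linked above for the authoritative setup:

```yaml
# grafana/provisioning/datasources/temporal-cloud.yml (all values illustrative)
apiVersion: 1
datasources:
  - name: Temporal Cloud
    type: prometheus
    access: proxy
    url: https://<account-id>.tmprl.cloud/prometheus
    jsonData:
      tlsAuth: true  # present the client certificate below when connecting
    secureJsonData:
      tlsClientCert: $__file{/etc/grafana/certs/client.pem}
      tlsClientKey: $__file{/etc/grafana/certs/client-key.pem}
```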
Set up Grafana with Temporal Cloud observability to view metrics by creating or getting your Prometheus endpoint for Temporal Cloud metrics and enabling SDK metrics.

---

## Temporal Cloud Observability and Metrics

Temporal offers two distinct sources of metrics: [Cloud/Server Metrics](/cloud/metrics/reference) and [SDK Metrics](/references/sdk-metrics). Each source provides options for levels of granularity and filtering, monitoring-tool integrations, and configuration.

Before implementing Temporal Cloud observability, decide what you need to measure for your use case. There are two primary use cases for metrics:

- To measure the health and performance of Temporal-backed applications and key business processes.
- To measure the health and performance of Temporal infrastructure and user-provided infrastructure in the form of Temporal Workers and Temporal Clients.

When measuring the performance of Temporal-backed applications and key business processes, you should rely on Temporal SDK metrics as a source of truth. This is because Temporal SDKs provide visibility from the perspective of your application, not from the perspective of the Temporal Service. SDK metrics monitor individual Workers and your code's behavior. Cloud metrics monitor Temporal's behavior. When used together, Temporal Cloud and SDK metrics measure the health and performance of your full Temporal infrastructure, including the Temporal Cloud Service and user-supplied Temporal Workers.

Cloud Metrics for all Namespaces in your account are available from two sources:

- [OpenMetrics Endpoint](/cloud/metrics/openmetrics) - A Prometheus-compatible scrapable endpoint.
- [PromQL Endpoint](/cloud/metrics/promql) - A Prometheus query endpoint.

:::note

OpenMetrics is the recommended option for most users.

:::

---

## OpenMetrics API Reference

The Temporal Cloud OpenMetrics API provides actionable operational metrics about your Temporal Cloud deployment. This is a scrapable HTTP API that returns metrics in OpenMetrics format, suitable for ingestion by Prometheus-compatible monitoring systems.

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Cloud OpenMetrics support is available in [Public Preview](/evaluate/development-production-features/release-stages#public-preview).

:::

## Available Metrics Reference

Metrics descriptions are also available programmatically via the `/v1/descriptors` endpoint. See the Metrics Reference for a list of available metrics.

## Authentication

Temporal uses API keys for integrating with the OpenMetrics endpoint. Applications must be authorized and authenticated before they can access metrics from Temporal Cloud. An API key is owned by a Service Account and inherits the permissions granted to the owner.

### Creating API Keys

API keys can be created using the [Temporal Cloud UI](https://cloud.temporal.io):

1. Navigate to Settings → Service Accounts
2. Create a service account with the **"Metrics Read-Only"** Account Level Role
3. Generate an API key within the service account

:::info

See the [docs](https://docs.temporal.io/cloud/api-keys#serviceaccount-api-keys) for more details on generating API keys.

:::

### Using API Keys

All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.

```shell
curl -H "Authorization: Bearer <api-key>" https://metrics.temporal.io/v1/metrics
```
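Tying the authentication and endpoint details above together, a Prometheus scrape job for this API might look like the following sketch. The job name, credentials file path, and namespace filter are illustrative; `honor_timestamps` and the intervals follow the best practices summarized later in this section:

```yaml
# prometheus.yml excerpt (illustrative values)
scrape_configs:
  - job_name: temporal-cloud-openmetrics
    scheme: https
    metrics_path: /v1/metrics
    scrape_interval: 60s
    scrape_timeout: 10s
    honor_timestamps: true  # keep server-side timestamps (2-minute offset)
    authorization:
      type: Bearer
      credentials_file: /etc/prometheus/temporal-api-key  # file containing the API key
    params:
      namespaces: ["production-*"]  # optional filter to reduce response size
    static_configs:
      - targets: ["metrics.temporal.io"]
```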
## Object Model

The object model for the Metrics API follows the [OpenMetrics](https://openmetrics.io/) standard.

### Metrics

A metric is a numeric attribute measured at a specific point in time, labeled with contextual metadata gathered at the point of instrumentation.

### Metric Types

All Temporal Cloud metrics are exposed as *gauges* in OpenMetrics format, but represent different measurement types:

* **Rate metrics**: Pre-computed per-second rates with delta temporality (e.g., `temporal_cloud_v1_workflow_success_count` - workflows completed per second)
* **Value metrics**: Current or instantaneous values (e.g., `temporal_cloud_v1_approximate_backlog_count` - current number of tasks in queue)

The list of metrics and their labels is available via the [List Descriptors](/cloud/metrics/openmetrics/api-reference#list-metric-descriptors) endpoint or in the [Metrics Reference](/cloud/metrics/openmetrics/metrics-reference).

### Labels

A label is a key-value attribute associated with a metric data point. Labels can be used to filter or aggregate metrics. Common labels include:

* `temporal_namespace`: The Temporal namespace
* `temporal_account`: The Temporal account
* `region`: The cloud region where the metric originated
* `temporal_workflow_type`: The workflow type (where applicable)
* `temporal_task_queue`: The task queue name (where applicable)

Each metric has its own set of applicable labels. See the Metrics Reference for complete details.

### Metric Family

A [Metric Family](https://github.com/prometheus/OpenMetrics/blob/main/specification/OpenMetrics.md#metricfamily) may have zero or more metrics. The set of metrics returned will vary based on actual system activity. Metrics only appear in a Metric Family if they were reported during the aggregation window.

## Client Considerations

### Rate Limiting

To protect the stability of the API and keep it available to all users, Temporal employs multiple safeguards. When a rate limit is breached, an HTTP `429 Too Many Requests` error is returned with the following headers:

| Header | Description |
| ----- | ----- |
| `Retry-After` | The time in seconds until the rate limit window resets |

#### Rate Limit Scopes

:::note
Rate limit scopes are subject to change.
:::

| Scope | Limit |
| ----- | ----- |
| Account | 180 requests per hour |

### Response Completeness

The `X-Completeness` header indicates whether the response contains all available data:

* `complete`: The response contains all metrics requested
* `limited`: The response was truncated due to size limits (30k metric data points max). Use namespace or metric filtering to reduce the response size.
* `unknown`: Completeness cannot be determined (possibly due to regional issues or timeouts). Clients are encouraged to retry.

### Retry Logic

Implement retry logic in your client to gracefully handle transient API failures. Use exponential backoff with jitter to avoid retry storms, with reasonable retry intervals so you stay under the rate limits; see the sketch at the end of this section.

### Data Latency

Metric data points are available for query within 2 minutes of their origination. This is in line with the freshest metrics [available from any major service provider](https://docs.datadoghq.com/integrations/guide/cloud-metric-delay/). Account for this latency when setting up monitoring alerts.
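To make the retry guidance above concrete, here is a minimal Python sketch; the helper name, backoff constants, and use of `requests` are illustrative, not part of any Temporal API:

```python
# Scrape with retries: exponential backoff with jitter for transient
# failures, honoring the documented Retry-After header on HTTP 429.
import random
import time

import requests

def scrape_with_retries(url: str, api_key: str, max_attempts: int = 5) -> str:
    delay = 1.0
    for _ in range(max_attempts):
        resp = requests.get(
            url,
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=30,
        )
        if resp.status_code == 429:
            # Rate limited: wait at least as long as the server asks.
            time.sleep(float(resp.headers.get("Retry-After", delay)))
        elif resp.status_code >= 500:
            # Transient server error: back off exponentially with jitter.
            time.sleep(delay + random.uniform(0, delay))
            delay *= 2
        else:
            resp.raise_for_status()  # surface non-retryable 4xx errors
            return resp.text
    raise RuntimeError(f"giving up after {max_attempts} attempts")
```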
## Endpoints

:::info
All endpoints are served from: `metrics.temporal.io`
:::

### Get Metrics

`GET /v1/metrics`

Returns metrics in OpenMetrics format suitable for scraping by Prometheus-compatible systems.

#### Timestamp Offset

To account for metric data latency, this endpoint returns metrics from the current timestamp minus a fixed offset. The current offset is 2 minutes, rounded down to the start of the minute. To accommodate this offset, the timestamps in the response should be honored when importing the metrics. For example, in Prometheus this can be controlled using the `honor_timestamps` flag.

#### Query Parameters

| Parameter | Type | Description |
| ----- | ----- | ----- |
| `namespaces` | string array | Filter to specific Namespaces. Supports wildcards (e.g., `production-*`) |
| `metrics` | string array | Filter to specific metrics |

#### Response Headers

| Header | Description |
| ----- | ----- |
| `X-Completeness` | Indicates the response status: `complete`, `limited`, or `unknown` |
| `Content-Type` | `application/openmetrics-text` |

:::info
Example Request:

```shell
curl -H "Authorization: Bearer <API_KEY>" \
  "https://metrics.temporal.io/v1/metrics?namespaces=production-*"
```

Response:

```
# TYPE temporal_cloud_v1_workflow_success_count gauge
# HELP temporal_cloud_v1_workflow_success_count The number of successful workflows per second
temporal_cloud_v1_workflow_success_count{temporal_namespace="production",temporal_workflow_type="payment-processing",region="aws-us-west-2"} 42.0 1609459200000
temporal_cloud_v1_workflow_success_count{temporal_namespace="production",temporal_workflow_type="order-fulfillment",region="aws-us-west-2"} 128.0 1609459200000
# TYPE temporal_cloud_v1_approximate_backlog_count gauge
# HELP temporal_cloud_v1_approximate_backlog_count Approximate number of tasks in a task queue
temporal_cloud_v1_approximate_backlog_count{temporal_namespace="production",temporal_task_queue="critical-queue",task_type="workflow",region="aws-us-west-2"} 15.0 1609459200000
```
:::

#### Summary of Best Practices

* *Honor timestamps*: Set `honor_timestamps: true` in Prometheus
* *Scrape interval*: Use 30 or 60 second intervals
* *Timeout*: Set the scrape timeout to 10 seconds for large responses
* *Filtering*: Use query parameters to reduce response size

### List Metric Descriptors

`GET /v1/descriptors`

Lists all metric descriptors, including metadata, data types, and available dimensions (a.k.a. labels).

#### Query Parameters

| Parameter | Type | Description |
| ----- | ----- | ----- |
| `limit` | integer | Page size (1-100, default: 100) |
| `offset` | integer | Page offset |

:::info
Example Request:

```shell
curl -H "Authorization: Bearer <API_KEY>" \
  "https://metrics.temporal.io/v1/descriptors"
```

Response:

```json
{
  "meta": {
    "pagination": {
      "total": 35,
      "limit": 100,
      "offset": 0
    }
  },
  "descriptors": [
    {
      "name": "temporal_cloud_v1_workflow_success_count",
      "help": "The number of successful workflows per second",
      "dimensions": [
        "temporal_namespace",
        "temporal_workflow_type",
        "temporal_task_queue",
        "region"
      ]
    }
  ]
}
```
:::

## Managing High Cardinality

:::caution
High-cardinality labels like `temporal_task_queue` and `temporal_workflow_type` can significantly increase metric volume and impact the performance of your monitoring system.
:::

### Cardinality Estimation

To estimate your metric cardinality and see whether this is an issue:

```
Total series = Base metrics × Namespaces × Task queues × Workflow types
```

Example:

* 6 workflow metrics with both labels
* 10 namespaces
* 50 task queues
* 20 workflow types
* = 6 × 10 × 50 × 20 = 60,000 time series

:::note
The 60,000 time series in the above example exceed the 30,000 data points per scrape limit.
:::
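The worksheet translates directly into code. A toy helper (not part of any API) for checking an estimate against the scrape limit:

```python
# Back-of-the-envelope series estimate vs. the 30k-datapoint scrape limit.
def estimated_series(base_metrics: int, namespaces: int,
                     task_queues: int, workflow_types: int) -> int:
    return base_metrics * namespaces * task_queues * workflow_types

total = estimated_series(base_metrics=6, namespaces=10,
                         task_queues=50, workflow_types=20)
print(total)           # 60000
print(total > 30_000)  # True: responses will be truncated without filtering
```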
If the cardinality is too high or you are hitting API limits, consider the following strategies.

### Filtering at Scrape Time

You can isolate only the metrics and Namespaces you need. For example, the following shows filtering by modifying the `metrics_path`:

```shell
# Only specific namespaces matching the wildcard pattern
/v1/metrics?namespaces=production-*

# Only specific metrics
/v1/metrics?metrics=temporal_cloud_v1_workflow_success_count

# Combined filtering
/v1/metrics?namespaces=prod-*&metrics=temporal_cloud_v1_approximate_backlog_count
```

:::info
In Prometheus, the `params` config can be set to match the same behavior as above.

```yaml
scrape_configs:
  - job_name: 'temporal-cloud'
    ...
    static_configs:
      - targets: ['metrics.temporal.io']
    metrics_path: '/v1/metrics'
    params:
      namespaces: ['prod-*']
      metrics: ['temporal_cloud_v1_approximate_backlog_count']
```
:::

### Label Management

#### Prometheus

If you use Prometheus, you can configure it to drop metrics with a specific label, or rename specific label values, to reduce cardinality.

```yaml
metric_relabel_configs:
  # Consolidate non-critical task queues
  - source_labels: [temporal_task_queue]
    regex: '(critical-queue|payment-queue)'
    target_label: __tmp_keep_original
    replacement: 'true'
  - source_labels: [__tmp_keep_original]
    regex: ''
    target_label: temporal_task_queue
    replacement: 'other'
  - regex: '__tmp_keep_original'
    action: labeldrop
```

#### OpenTelemetry Collector

To accomplish the same with the OpenTelemetry Collector, a filter can be used alongside any other processors. (Note: the `expressions` field belongs to the filter processor's `expr` matcher, so `match_type` is set to `expr` here rather than `regexp`; check your collector version's filter processor docs for the exact expression syntax.)

```yaml
processors:
  filter:
    metrics:
      include:
        match_type: expr
        expressions:
          # Only keep metrics with critical-queue or payment-queue
          - Label("temporal_task_queue") == nil or Label("temporal_task_queue") matches "^(critical-queue|payment-queue)$"
```

### Monitoring Cardinality

Cardinality can be monitored with these PromQL queries:

```
# Count the total number of series
count({__name__=~"temporal_cloud_v1_.*"})

# Count the total number of series by metric
count({__name__=~"temporal_cloud_v1_.*"}) by (__name__)
```

## API Limits

| Limit | Impact | Mitigation |
| ----- | ----- | ----- |
| 30k total datapoints per scrape | Response may be truncated | Use namespace/metric filtering |
| 180 requests per account per hour | HTTP 429 returned | Set an appropriate scrape interval of 30-60s |
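Both mitigations come down to scraping less per request. A Python sketch that applies the filters and checks the completeness header; it assumes the `namespaces` and `metrics` parameters are sent as repeated query parameters (verify the encoding against the Query Parameters table above):

```python
# Filtered scrape: request only the namespaces and metrics you need,
# then check X-Completeness to confirm nothing was truncated.
import os

import requests

resp = requests.get(
    "https://metrics.temporal.io/v1/metrics",
    headers={"Authorization": f"Bearer {os.environ['TEMPORAL_API_KEY']}"},
    params={
        "namespaces": ["production-*"],  # requests repeats list-valued params
        "metrics": [
            "temporal_cloud_v1_workflow_success_count",
            "temporal_cloud_v1_approximate_backlog_count",
        ],
    },
    timeout=30,
)
resp.raise_for_status()
if resp.headers.get("X-Completeness") == "limited":
    print("Response still truncated: narrow the namespace or metric filters")
```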
---

## Temporal Cloud OpenMetrics

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Cloud OpenMetrics support is available in [Public Preview](/evaluate/development-production-features/release-stages#public-preview).
:::

:::tip PRICING
Future pricing may apply to high-volume usage that exceeds standard [limits](/cloud/metrics/openmetrics/api-reference#api-limits).
:::

Temporal Cloud's [OpenMetrics](https://openmetrics.io/) endpoint provides operational metrics for your Temporal Cloud workloads in industry-standard Prometheus format, enabling comprehensive monitoring across Namespaces, Workflows, and Task Queues with your existing observability stack.

## Quick Links

* [Integrations](/cloud/metrics/openmetrics/metrics-integrations) - Get started exporting metrics with common integrations
* [API Documentation](/cloud/metrics/openmetrics/api-reference) - Endpoint specification and advanced configuration
* [Metrics Reference](/cloud/metrics/openmetrics/metrics-reference) - Complete catalog of all metrics with descriptions and labels
* [Migration Guide](/cloud/metrics/openmetrics/migration-guide) - How to transition from the Prometheus query endpoint

## Overview

Temporal Cloud OpenMetrics exposes 30+ metrics covering workflow lifecycles, task queue operations, service performance, and system limits. All metrics are aggregated over one-minute windows and available for scraping within two minutes.

* [Set up authentication and scraping](/cloud/metrics/openmetrics/api-reference#authentication) with the API documentation.
* Browse the [complete metrics catalog](/cloud/metrics/openmetrics/metrics-reference) for descriptions and labels.
* Teams using the query endpoint should review the [migration guide](/cloud/metrics/openmetrics/migration-guide).

## API key authentication

Create a [service account](/cloud/metrics/openmetrics/migration-guide#create-an-api-key) with the "Metrics Read-Only" role, generate an API key, and start scraping immediately; no certificate rotation or distribution is required.

## Global endpoint

A single endpoint at `metrics.temporal.io` serves all metrics across your entire account with API key authentication and standard HTTPS.

## Namespace and metric filtering

Query parameters enable selective scraping to manage data volume and costs. They support wildcards for flexible Namespace selection as well as specific metric filtering.

## Dashboard templates

Production-ready [Grafana dashboards](https://github.com/grafana/jsonnet-libs/blob/master/temporal-mixin/dashboards/temporal-overview.json) provide immediate visibility with pre-built queries and visualizations.

---

## Metrics Integrations

Metrics can be exported from Temporal Cloud using the OpenMetrics endpoint. This document describes how to configure integrations that have third-party support or are based on open standards. It covers basic configuration only; for advanced concepts such as label management and high-cardinality scenarios, see the [general API reference](/cloud/metrics/openmetrics/api-reference).

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Cloud OpenMetrics support is available in [Public Preview](/evaluate/development-production-features/release-stages#public-preview).
:::

## Integrations

### Grafana Cloud

Grafana provides a serverless integration with the OpenMetrics endpoint for Grafana Cloud. This integration scrapes metrics, stores them in Grafana Cloud, and provides a default dashboard for visualizing the metrics in Grafana Cloud. See the [integration page](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/integrations/integration-reference/integration-temporal/) for more details.

### ClickStack

ClickHouse provides an integration with the OpenMetrics endpoint for ClickStack. This integration uses an OpenTelemetry collector to read from the OpenMetrics endpoint, ingests data into ClickHouse, and includes a default dashboard for visualizing the data with HyperDX. See the [integration page](https://clickhouse.com/docs/use-cases/observability/clickstack/integrations/temporal-metrics) for more details.
### New Relic

New Relic integrates with Temporal Cloud via the infrastructure agent, using a Flex integration that pulls data from the OpenMetrics endpoint. See the [integration page](https://docs.newrelic.com/docs/infrastructure/host-integrations/host-integrations-list/temporal-cloud-integration/) for more details.

### Prometheus + Grafana

Self-hosted Prometheus can be used to scrape the OpenMetrics endpoint.

1. Add a new scrape job for the OpenMetrics endpoint with your [API key](/cloud/metrics/openmetrics/api-reference#creating-api-keys).

```yaml
scrape_configs:
  - job_name: 'temporal-cloud'
    scrape_interval: 60s
    scrape_timeout: 30s
    honor_timestamps: true
    scheme: https
    authorization:
      type: Bearer
      credentials: '<API_KEY>'
    static_configs:
      - targets: ['metrics.temporal.io']
    metrics_path: '/v1/metrics'
```

2. Import the [Grafana dashboard](https://github.com/grafana/jsonnet-libs/blob/master/temporal-mixin/dashboards/temporal-overview.json) and configure your Prometheus datasource.

### OpenTelemetry Collector Configuration

Collect metrics with a self-hosted OpenTelemetry Collector to ingest them into the system of your choosing.

1. Add a new Prometheus receiver for the OpenMetrics endpoint with your [API key](/cloud/metrics/openmetrics/api-reference#creating-api-keys).

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'temporal-cloud'
          scrape_interval: 60s
          scrape_timeout: 30s
          honor_timestamps: true
          scheme: https
          authorization:
            type: Bearer
            credentials_file: <path-to-api-key-file>
          static_configs:
            - targets: ['metrics.temporal.io']
          metrics_path: '/v1/metrics'

processors:
  batch:

exporters:
  otlphttp:
    endpoint: <your-otlp-endpoint>

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [otlphttp]
```

:::info
Examples for these integrations and more are [here](https://github.com/temporal-community/cloud-metrics-scrape-examples).
:::

---

## OpenMetrics Metrics Reference

This document describes all metrics available from the Temporal Cloud OpenMetrics endpoint.

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Cloud OpenMetrics support is available in [Public Preview](/evaluate/development-production-features/release-stages#public-preview).
:::

## Metric Conventions

### Metric Types

All metrics are exposed as OpenMetrics gauges, but represent different measurement types:

* *Rate Metrics*: The per-second rate of the aggregated values
* *Value Metrics*: The most recent aggregate value within a look-back window (e.g. backlogs, limits)
* *Percentile Metrics*: Pre-calculated aggregated latency percentiles in seconds

:::note
All metrics are stored as 1-minute aggregates.
:::

### Common Labels

All metrics include these base labels:

| Label | Description |
| ----- | ----- |
| `temporal_namespace` | The Temporal namespace |
| `temporal_account` | The Temporal account identifier |
| `region` | Cloud region where the metric originated |

## Metrics Catalog

### Frontend Service Metrics

#### temporal\_cloud\_v1\_service\_request\_count

gRPC requests received per second.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the RPC operation |

**Type**: Rate

#### temporal\_cloud\_v1\_service\_request\_throttled\_count

gRPC requests throttled per second.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the RPC operation |

**Type**: Rate

#### temporal\_cloud\_v1\_service\_error\_count

gRPC errors per second.
| Label | Description |
| ----- | ----- |
| `operation` | The name of the RPC operation |

**Type**: Rate

#### temporal\_cloud\_v1\_service\_pending\_requests

The number of pollers that are waiting for a task. Use this to track utilization against `temporal_cloud_v1_poller_limit`.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the operation |

**Type**: Value

#### temporal\_cloud\_v1\_resource\_exhausted\_error\_count

Resource exhaustion errors per second. This metric does not include throttling due to Namespace limits.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the operation |

**Type**: Rate

#### temporal\_cloud\_v1\_service\_latency\_p50

:::caution
Avoid aggregating this metric across dimensions, because the percentile won't be accurate.
:::

The 50th percentile latency of service requests in seconds.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the operation |

**Type**: Latency

#### temporal\_cloud\_v1\_service\_latency\_p95

:::caution
Avoid aggregating this metric across dimensions, because the percentile won't be accurate.
:::

The 95th percentile latency of service requests in seconds.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the operation |

**Type**: Latency

#### temporal\_cloud\_v1\_service\_latency\_p99

:::caution
Avoid aggregating this metric across dimensions, because the percentile won't be accurate.
:::

The 99th percentile latency of service requests in seconds.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the operation |

**Type**: Latency

### Workflow Completion Metrics

:::caution High Cardinality
These metrics can have high cardinality depending on the number of workflow types and task queues.
:::

#### temporal\_cloud\_v1\_workflow\_success\_count

Successful workflow completions per second.

| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `temporal_workflow_type` | The workflow type |

**Type**: Rate

#### temporal\_cloud\_v1\_workflow\_failed\_count

Workflow failures per second.

| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `temporal_workflow_type` | The workflow type |

**Type**: Rate

#### temporal\_cloud\_v1\_workflow\_timeout\_count

Workflow timeouts per second.

| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `temporal_workflow_type` | The workflow type |

**Type**: Rate

#### temporal\_cloud\_v1\_workflow\_cancel\_count

Workflow cancellations per second.

| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `temporal_workflow_type` | The workflow type |

**Type**: Rate

#### temporal\_cloud\_v1\_workflow\_terminate\_count

Workflow terminations per second.

| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `temporal_workflow_type` | The workflow type |

**Type**: Rate

#### temporal\_cloud\_v1\_workflow\_continued\_as\_new\_count

Workflows continued as new per second.

| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `temporal_workflow_type` | The workflow type |

**Type**: Rate

### Task Queue Metrics

:::caution High Cardinality
These metrics can have high cardinality depending on the number of task queues present.
:::

#### temporal\_cloud\_v1\_approximate\_backlog\_count

The approximate number of tasks pending in a task queue. Started Activities are not included in the count because they have been dequeued from the task queue.
| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `task_type` | Type of task: `workflow` or `activity` |

**Type**: Value

#### temporal\_cloud\_v1\_poll\_success\_count

Successfully matched tasks per second.

| Label | Description |
| ----- | ----- |
| `operation` | The poll operation name |
| `task_type` | Type of task: `workflow` or `activity` |
| `temporal_task_queue` | The task queue name |

**Type**: Rate

#### temporal\_cloud\_v1\_poll\_success\_sync\_count

Tasks matched synchronously per second (no polling wait).

| Label | Description |
| ----- | ----- |
| `operation` | The poll operation name |
| `task_type` | Type of task: `workflow` or `activity` |
| `temporal_task_queue` | The task queue name |

**Type**: Rate

#### temporal\_cloud\_v1\_poll\_timeout\_count

The rate of poll requests that timed out without receiving a task.

| Label | Description |
| ----- | ----- |
| `operation` | The poll operation name |
| `task_type` | Type of task: `workflow` or `activity` |
| `temporal_task_queue` | The task queue name |

**Type**: Rate

#### temporal\_cloud\_v1\_no\_poller\_tasks\_count

The rate of tasks added to queues with no active pollers.

| Label | Description |
| ----- | ----- |
| `temporal_task_queue` | The task queue name |
| `task_type` | Type of task: `workflow` or `activity` |

**Type**: Rate

### Namespace Metrics

#### temporal\_cloud\_v1\_namespace\_open\_workflows

The current number of open workflows in a namespace.

**Type**: Value

#### temporal\_cloud\_v1\_total\_action\_count

The total number of actions performed per second. Actions with `is_background=false` are counted toward the `temporal_cloud_v1_action_limit`.

| Label | Description |
| ----- | ----- |
| `is_background` | Whether the action was background: `true` or `false`. Background actions (e.g. History export) do not count toward the action rate limit |
| `namespace_mode` | Indicates if actions are produced by an `active` or a `standby` Namespace |

**Type**: Rate

#### temporal\_cloud\_v1\_total\_action\_throttled\_count

The total number of actions throttled per second.

**Type**: Rate

#### temporal\_cloud\_v1\_operations\_count

Operations performed per second.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the operation |
| `is_background` | Whether the operation was background: `true` or `false`. Background operations do not count toward the operation rate limit |
| `namespace_mode` | Indicates if operations are produced by an `active` or a `standby` Namespace |

**Type**: Rate

#### temporal\_cloud\_v1\_operations\_throttled\_count

Operations throttled due to rate limits per second.

| Label | Description |
| ----- | ----- |
| `operation` | The name of the operation |
| `is_background` | Whether the operation was background: `true` or `false`. Background operations do not count toward the operation rate limit |
| `namespace_mode` | Indicates if operations are throttled in an `active` or a `standby` Namespace |

**Type**: Rate

### Schedule Metrics

#### temporal\_cloud\_v1\_schedule\_action\_success\_count

Successfully executed scheduled workflows per second.

**Type**: Rate

#### temporal\_cloud\_v1\_schedule\_buffer\_overruns\_count

The rate of schedule buffer overruns when using the `BUFFER_ALL` overlap policy.

**Type**: Rate

#### temporal\_cloud\_v1\_schedule\_missed\_catchup\_window\_count

The rate of missed schedule executions outside the catchup window.
**Type**: Rate

#### temporal\_cloud\_v1\_schedule\_rate\_limited\_count

The rate of scheduled workflows delayed due to rate limiting.

**Type**: Rate

### Replication Metrics

#### temporal\_cloud\_v1\_replication\_lag\_p50

The 50th percentile cross-region replication lag in seconds.

**Type**: Latency

#### temporal\_cloud\_v1\_replication\_lag\_p95

The 95th percentile cross-region replication lag in seconds.

**Type**: Latency

#### temporal\_cloud\_v1\_replication\_lag\_p99

The 99th percentile cross-region replication lag in seconds.

**Type**: Latency

### Limit Metrics

#### temporal\_cloud\_v1\_operations\_limit

The current configured operations-per-second limit for a namespace.

**Type**: Value

#### temporal\_cloud\_v1\_action\_limit

The current configured actions-per-second limit for a namespace. Track utilization against this limit with `temporal_cloud_v1_total_action_count` and `is_background=false`.

**Type**: Value

#### temporal\_cloud\_v1\_service\_request\_limit

The current configured frontend service RPS limit for a namespace. Track utilization against this limit with `temporal_cloud_v1_service_request_count`.

**Type**: Value

#### temporal\_cloud\_v1\_poller\_limit

The current configured poller limit for a namespace. Track utilization against this limit with `temporal_cloud_v1_service_pending_requests`.

**Type**: Value

---

## OpenMetrics Migration Guide

Temporal Cloud is transitioning from our Prometheus query endpoint to an industry-standard OpenMetrics (Prometheus-compatible) endpoint for metrics collection. This migration represents a significant improvement in how you can monitor your Temporal Cloud workloads, bringing enhanced capabilities, better integration with observability tools, and access to high-cardinality metrics that were previously unavailable.

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
The OpenMetrics endpoint is available in [Public Preview](/evaluate/development-production-features/release-stages#public-preview) for testing and validation. The existing Prometheus query endpoint remains fully operational and supported.
:::

## Why We're Making This Change

1. **Industry-Standard Format**: Native compatibility with Prometheus, OpenTelemetry, and all major observability platforms (Datadog, New Relic, etc.) without custom integrations.
2. **High-Cardinality Metrics**: Access to previously unavailable dimensions, including:
   - `temporal_task_queue` labels on multiple metrics
   - `temporal_workflow_type` labels for workflow-specific monitoring
   - New task queue backlog metrics for better operational visibility
3. **Accurate Percentiles**: Our new system provides accurate percentile calculations for latency metrics, even in the presence of substantial outliers, unlike Prometheus-style histograms.
4. **Simplified Integration**: Direct scraping from your observability tools without intermediate translation layers.
5. **Enhanced Performance**: Optimized for high-cardinality data with built-in safeguards for system stability.

Data is available to scrape two minutes from the time it was emitted, in line with the freshest metrics [available from any major service provider](https://docs.datadoghq.com/integrations/guide/cloud-metric-delay/).
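The temporality change in particular affects existing queries. A toy Python illustration, with made-up numbers, of what `rate()` used to compute against the old cumulative counters and what the new endpoint now reports directly:

```python
# v0 exposed cumulative counters; rate() turned them into per-second rates.
# v1 reports the per-second rate directly, so no rate() is needed.
cumulative = [(0, 100), (60, 160), (120, 280)]  # (seconds, counter value)

for (t0, v0), (t1, v1) in zip(cumulative, cumulative[1:]):
    print(f"{t0}-{t1}s: {(v1 - v0) / (t1 - t0):.1f} req/s")
# 0-60s: 1.0 req/s   <- a v1 rate metric would report 1.0 here
# 60-120s: 2.0 req/s <- ...and 2.0 here
```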
## What's Changing

| Aspect | Current Query Endpoint | New OpenMetrics Endpoint |
| ---------------------- | -------------------------------------------------- | ------------------------------------------- |
| **Protocol** | Prometheus Query API (`/api/v1/query`) | OpenMetrics scrape endpoint (`/v1/metrics`) |
| **Authentication** | mTLS certificates with customer-specific endpoints | API keys with a global endpoint |
| **Metric Temporality** | Cumulative counters | Delta temporality (pre-computed rates) |
| **Query Requirement** | Direct queries supported | Requires an observability platform |
| **Cardinality** | Limited labels | High-cardinality labels available |
| **Metric Naming** | `*_v0_*` metrics | `*_v1_*` metrics |

## Migration Timeline

Here is the current estimated timeline for migrating from the Prometheus query endpoint to the OpenMetrics endpoint.

:::caution
Timelines can shift, so be sure to stay up to date on upcoming releases.
:::

**Public Preview (Current)**
- OpenMetrics endpoint available for onboarding.
- Both endpoints run in parallel with no changes required.

**General Availability [TBA]**:
- OpenMetrics endpoint becomes production-ready and the standard for metrics collection.

**Query Endpoint Deprecation (6 months after GA)**:
- Prometheus query endpoint deprecated and eventually removed.

:::important Action Required
Complete migration before the 6-month deprecation window ends.
:::

## Notable Differences

### 1. No longer use `rate()` in Prometheus queries

Metrics are now pre-computed as per-second rates with delta temporality.

**Before (Prometheus query endpoint)**:

```
rate(temporal_cloud_v0_frontend_service_request_count[1m])
```

**After (OpenMetrics endpoint)**:

```
temporal_cloud_v1_service_request_count
```

### 2. Functions that no longer apply

Metrics from OpenMetrics are already rates, so certain Prometheus functions no longer make sense. Below is a non-exhaustive list:

- ❌ `rate()` - Already computed
- ❌ `increase()` - The increase of a rate is meaningless
- ❌ `irate()` - Instant rate not applicable
- ❌ `histogram_quantile()` - Not applicable (explicit percentiles provided instead)
- ✅ `sum()`, `avg()`, `max()`, `min()` - Still work normally

### 3. Percentile metrics

The new endpoint provides explicit percentile metrics (p50, p95, p99) rather than histogram buckets:

**Before (Prometheus query endpoint)**: Calculate percentiles using `histogram_quantile()`

```shell
histogram_quantile(0.95, rate(temporal_cloud_v0_service_latency_bucket[5m]))
```

**After (OpenMetrics endpoint)**: Use pre-calculated percentiles directly

```
temporal_cloud_v1_service_latency_p95
```

**Important Tradeoff**: While pre-calculated percentiles are more accurate for individual time series, they _cannot be accurately aggregated_. For example:

- ❌ Cannot sum or average p95 values across Namespaces to get a global p95
- ❌ Cannot aggregate p95 values across regions or Task Queues
- ✅ Can still view individual namespace/task queue percentiles accurately
- ✅ More accurate percentile calculations for individual series, especially with outliers
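A toy demonstration of that tradeoff, using numpy with made-up latency samples: the average of two per-namespace p95s differs from the true p95 of the combined data.

```python
# Why pre-computed percentiles must not be averaged across series.
import numpy as np

ns_a = np.array([0.01] * 95 + [0.02] * 5)  # fast namespace (seconds)
ns_b = np.array([0.5] * 95 + [5.0] * 5)    # slow namespace with outliers

p95_a = np.percentile(ns_a, 95)
p95_b = np.percentile(ns_b, 95)
print((p95_a + p95_b) / 2)                              # ~0.37 "average p95"
print(np.percentile(np.concatenate([ns_a, ns_b]), 95))  # 0.5, the true p95
```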
### 4. Authentication Setup

**Before**: mTLS certificates with a customer-specific endpoint

```shell
curl --cert /path/to/client.pem \
  --key /path/to/client.key \
  --cacert /path/to/ca.pem \
  "https://<account-id>.tmprl.cloud/api/v1/query?query=rate(temporal_cloud_v0_frontend_service_request_count[5m])&time=2025-01-15T10:00:00Z"
```

**After**: API key with the global endpoint

```shell
curl -H "Authorization: Bearer <API_KEY>" https://metrics.temporal.io/v1/metrics
```

## Migration Steps

### Create an API Key

Create a service account within the Temporal Cloud UI settings with the “Metrics Read-Only” Account Level Role.

:::note
As this is an account-level role, scoping it to specific namespaces has no effect; it will have access to the full account’s metrics.
:::

Once this is created, you can create an API key within this service account, which will inherit the role. Save this API key in a secure location and use it to access the metrics APIs.

To test that this works, curl the endpoint with your API key. The output should resemble the following example:

```shell
$ curl -H "Authorization: Bearer <API_KEY>" https://metrics.temporal.io/v1/metrics
# TYPE temporal_cloud_v1_service_error_count gauge
# HELP temporal_cloud_v1_service_error_count The number of gRPC errors returned by frontend service
# TYPE temporal_cloud_v1_service_pending_requests gauge
# HELP temporal_cloud_v1_service_pending_requests The number of pollers that are waiting for a task
# TYPE temporal_cloud_v1_service_request_count gauge
# HELP temporal_cloud_v1_service_request_count The number of RPC requests received by the service.
```

Now you are ready to scrape your metrics!

### Configuring Grafana + Prometheus

#### Update Prometheus Configuration

Add a new scrape job for the OpenMetrics endpoint with your API key.

```yaml
scrape_configs:
  - job_name: temporal-cloud
    static_configs:
      - targets:
          - 'metrics.temporal.io'
    scheme: https
    metrics_path: '/v1/metrics'
    honor_timestamps: true
    scrape_interval: 60s
    scrape_timeout: 30s
    authorization:
      type: Bearer
      credentials: 'API_KEY'
```

:::note
This replaces the direct Grafana datasource configuration you used with the query endpoint.
:::

#### Install New Dashboards

- Download the new Grafana dashboard: [temporal_cloud_openmetrics.json](https://github.com/temporalio/dashboards/blob/master/cloud/temporal_cloud_openmetrics.json)
- Import it alongside existing dashboards during the transition
- Update any custom alerts and queries to use the new metrics and remove `rate()` functions

### Configuring Datadog

:::tip
Automated integration update coming soon.
:::

The Datadog team is working on updating the official Temporal Cloud integration to use the new endpoint. This transition should be largely transparent for most users. For users who want to get started immediately, Temporal Cloud metrics can be integrated directly into Datadog by configuring the Datadog agent to scrape the OpenMetrics endpoint. An example lives [here](https://github.com/temporal-community/cloud-metrics-scrape-examples/tree/main/datadog/openmetrics).
#### Other Observability Providers

Consult the documentation for your observability system for how to configure it to scrape this endpoint and retrieve your metrics:

- [NewRelic](https://docs.newrelic.com/docs/infrastructure/prometheus-integrations/install-configure-openmetrics/configure-prometheus-openmetrics-integrations/)
- [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/configuration/#receivers)

Examples for all these integrations live [here](https://github.com/temporal-community/cloud-metrics-scrape-examples).

### Metric Mapping Reference

Below is a template for mapping metrics from the old query endpoint to the new OpenMetrics endpoint. All metrics follow the `v0` → `v1` version change, and the fundamental difference is the shift from cumulative counters to pre-computed rates for the majority of the metrics.

Note that only the labels newly added to each metric are listed below. For the complete list of labels, see the [Metrics Reference](/cloud/metrics/openmetrics/metrics-reference).

#### Frontend Service Metrics

| Old Metric (v0) | New Metric (v1) | New Labels |
| -------------------------------------------------- | -------------------------------------------------- | ---------- |
| `temporal_cloud_v0_frontend_service_error_count` | `temporal_cloud_v1_service_error_count` | `region` |
| `temporal_cloud_v0_frontend_service_request_count` | `temporal_cloud_v1_service_request_count` | `region` |
| `temporal_cloud_v0_resource_exhausted_error_count` | `temporal_cloud_v1_resource_exhausted_error_count` | `region` |
| `temporal_cloud_v0_total_action_count` | `temporal_cloud_v1_total_action_count` | `region` |

#### Workflow Metrics

| Old Metric (v0) | New Metric (v1) | New Labels |
| --------------------------------------------------- | --------------------------------------------------- | ------------------------------------------------------- |
| `temporal_cloud_v0_workflow_cancel_count` | `temporal_cloud_v1_workflow_cancel_count` | `region` `temporal_workflow_type` `temporal_task_queue` |
| `temporal_cloud_v0_workflow_continued_as_new_count` | `temporal_cloud_v1_workflow_continued_as_new_count` | `region` `temporal_workflow_type` `temporal_task_queue` |
| `temporal_cloud_v0_workflow_failed_count` | `temporal_cloud_v1_workflow_failed_count` | `region` `temporal_workflow_type` `temporal_task_queue` |
| `temporal_cloud_v0_workflow_success_count` | `temporal_cloud_v1_workflow_success_count` | `region` `temporal_workflow_type` `temporal_task_queue` |
| `temporal_cloud_v0_workflow_terminate_count` | `temporal_cloud_v1_workflow_terminate_count` | `region` `temporal_workflow_type` `temporal_task_queue` |
| `temporal_cloud_v0_workflow_timeout_count` | `temporal_cloud_v1_workflow_timeout_count` | `region` `temporal_workflow_type` `temporal_task_queue` |

#### Poll Metrics

| Old Metric (v0) | New Metric (v1) | New Labels |
| ------------------------------------------- | ------------------------------------------- | ------------------------------ |
| `temporal_cloud_v0_poll_success_count` | `temporal_cloud_v1_poll_success_count` | `region` `temporal_task_queue` |
| `temporal_cloud_v0_poll_success_sync_count` | `temporal_cloud_v1_poll_success_sync_count` | `region` `temporal_task_queue` |
| `temporal_cloud_v0_poll_timeout_count` | `temporal_cloud_v1_poll_timeout_count` | `region` `temporal_task_queue` |

#### Latency Metrics

| Old Metric (v0) | New Metric (v1) | New Labels |
| ------------------------------------------- | ------------------------------------------- | ---------- |
| `temporal_cloud_v0_service_latency_bucket` `temporal_cloud_v0_service_latency_count` `temporal_cloud_v0_service_latency_sum` | `temporal_cloud_v1_service_latency_p99` `temporal_cloud_v1_service_latency_p95` `temporal_cloud_v1_service_latency_p50` | `region` |
| `temporal_cloud_v0_replication_lag_bucket` `temporal_cloud_v0_replication_lag_count` `temporal_cloud_v0_replication_lag_sum` | `temporal_cloud_v1_replication_lag_p99` `temporal_cloud_v1_replication_lag_p95` `temporal_cloud_v1_replication_lag_p50` | `region` |

#### Schedule Metrics

| Old Metric (v0) | New Metric (v1) | New Labels |
| -------------------------------------------------------- | -------------------------------------------------------- | ---------- |
| `temporal_cloud_v0_schedule_action_success_count` | `temporal_cloud_v1_schedule_action_success_count` | `region` |
| `temporal_cloud_v0_schedule_buffer_overruns_count` | `temporal_cloud_v1_schedule_buffer_overruns_count` | `region` |
| `temporal_cloud_v0_schedule_missed_catchup_window_count` | `temporal_cloud_v1_schedule_missed_catchup_window_count` | `region` |
| `temporal_cloud_v0_schedule_rate_limited_count` | `temporal_cloud_v1_schedule_rate_limited_count` | `region` |

In addition to these metrics, there are a number of new metrics provided by our OpenMetrics endpoint.

:::info
See the [metrics reference](/cloud/metrics/openmetrics/metrics-reference) for an up-to-date list of all available metrics and their full descriptions.
:::

### Managing High-Cardinality

The new endpoint provides access to high-cardinality labels that can significantly increase your metric volume:

#### High-Cardinality Labels

- `temporal_task_queue`
- `temporal_workflow_type`

#### Best Practices

##### Namespace/Metric filtering

Namespace filtering can be used to ensure that metrics are scraped only for relevant Namespaces, which reduces cardinality.

```
https://metrics.temporal.io/v1/metrics?namespaces=production-*
```

This can be taken further by scraping only the relevant metrics for a given namespace, which ensures that any new high-cardinality metrics won’t be an issue for your observability system.

```
https://metrics.temporal.io/v1/metrics?namespaces=production-*&metrics=temporal_cloud_v1_workflow_success_count
```

##### Relabeling

If the above doesn’t work, consider dropping problematic labels post-scrape but pre-ingestion into your observability system. For example, in Prometheus this can be done via [relabeling rules](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config). (The rule below blanks out the label on one metric; in Prometheus, a label with an empty value is equivalent to the label being absent.)

```yaml
metric_relabel_configs:
  # Remove temporal_task_queue from this metric only
  - source_labels: [__name__]
    regex: 'temporal_cloud_v1_poll_success_count'
    target_label: temporal_task_queue
    replacement: ''
```

Or you can relabel certain label values in order to keep only the significant ones. For example, it’s possible to rename less important task queues to “unknown” while retaining the important ones.
```yaml
metric_relabel_configs:
  - source_labels: [temporal_task_queue]
    regex: '(critical-queue|payment-queue)'
    target_label: __tmp_keep_original
    replacement: 'true'
  # For anything without the keep flag, replace with "unknown"
  - source_labels: [__tmp_keep_original]
    regex: ''  # empty/missing value
    target_label: temporal_task_queue
    replacement: 'unknown'
  # Clean up the temporary label
  - regex: '__tmp_keep_original'
    action: labeldrop
```

## Limits

See [API limits](/cloud/metrics/openmetrics/api-reference#api-limits) for details.

## FAQ

### Q: Will metrics match between the PromQL and OpenMetrics endpoints?

No. The metrics will be approximately the same, but due to aggregation differences and windowing, values likely won't match exactly between the two endpoints. Some metrics may be consistently different, such as `temporal_cloud_v1_total_action_count`, which includes History Export actions in the OpenMetrics endpoint. In the case of consistent differences, the OpenMetrics endpoint is considered more accurate.

### Q: Can I still query metrics directly (e.g. with a Grafana dashboard)?

Currently, the OpenMetrics endpoint requires an observability platform to collect and query metrics. Direct querying via the API to return a time series of data is not supported. Supporting this type of query pattern is a future roadmap item.

### Q: What happens to my existing dashboards and alerts?

During the transition period, both endpoints remain active.

### Q: Will historical data be preserved?

Historical data from the query endpoint will remain in your observability platform. To maintain continuity:

- Combine old (`v0`) and new (`v1`) metrics in your queries during the transition
- Consider using the PromQL `or` operator: `metric_v1 or metric_v0`

### Q: Are there limits to how frequently I can scrape or how much data will be returned?

The limits are documented [here](/cloud/metrics/openmetrics/api-reference#api-limits).

### Q: Why are some metrics missing from my scrapes? I don’t see all the metrics documented.

The OpenMetrics endpoint only returns metrics that were generated during the one-minute aggregation window. This is different from the query endpoint, which might return zeros.

**What this means:**

- If no workflows failed in the last minute, `temporal_cloud_v1_workflow_failed_count` won't appear in that scrape.
- If a specific task queue had no activity, its metrics will be absent.
- The set of metrics returned varies between scrapes based on system activity.

**This is normal behavior.** Unlike some metrics systems that populate zeros, the OpenMetrics endpoint follows a sparse reporting pattern: metrics only appear when there's actual data to report.

**How to handle this in queries:**

```
(temporal_cloud_v1_workflow_failed_count{temporal_namespace="production"} or vector(0))
```

This ensures your dashboards and alerts work correctly even when metrics are temporarily absent due to inactivity.

---

## Prometheus Grafana setup

**How to set up Grafana with Temporal Cloud observability to view metrics.**

Temporal Cloud and SDKs generate metrics for monitoring performance and troubleshooting errors.

Temporal Cloud emits metrics through a [Prometheus HTTP API endpoint](https://prometheus.io/docs/prometheus/latest/querying/api/), which can be used directly as a Prometheus data source in Grafana or to query and export Cloud metrics to any observability platform. The open-source SDKs require you to set up a Prometheus scrape endpoint for Prometheus to collect and aggregate the Worker and Client metrics.
This section describes how to set up your Temporal Cloud and SDK metrics and use them as data sources in Grafana. The process for setting up observability includes the following steps:

1. Create or get your Prometheus endpoint for Temporal Cloud metrics and enable SDK metrics.
   - For Temporal Cloud, [generate a Prometheus HTTP API endpoint](/cloud/metrics/general-setup) on Temporal Cloud using valid certificates.
   - For SDKs, [expose a metrics endpoint](#sdk-metrics-setup) where Prometheus can scrape SDK metrics and [run Prometheus](#prometheus-configuration) on your host. The examples in this article describe running Prometheus on your local machine where you run your application code.
2. Run Grafana and [set up data sources for Temporal Cloud and SDK metrics](#grafana-data-sources-configuration) in Grafana. The examples in this article describe running Grafana on your local host where you run your application code.
3. [Create dashboards](#grafana-dashboards-setup) in Grafana to view Temporal Cloud metrics and SDK metrics. Temporal provides [sample community-driven Grafana dashboards](https://github.com/temporalio/dashboards) for Cloud and SDK metrics that you can use and customize according to your requirements.

If you're following along with the examples provided here, ensure that you have the following:

- Root CA certificates and end-entity certificates. See [Certificate requirements](/cloud/certificates#certificate-requirements) for details.
- Set up your connections to Temporal Cloud using an SDK of your choice and have some Workflows running on Temporal Cloud. See Connect to a Temporal Service for details.
  - [Go](/develop/go/temporal-client#connect-to-temporal-cloud)
  - [Java](/develop/java/temporal-client#connect-to-temporal-cloud)
  - [PHP](/develop/php/temporal-client#connect-to-a-dev-cluster)
  - [Python](/develop/python/temporal-client#connect-to-temporal-cloud)
  - [TypeScript](/develop/typescript/core-application#connect-to-temporal-cloud)
  - [.NET](/develop/dotnet/temporal-client#connect-to-temporal-cloud)
- Prometheus and Grafana installed.

## Temporal Cloud metrics setup

Before you set up your Temporal Cloud metrics, ensure that you have the following:

- Account Owner or Global Admin [role privileges](/cloud/users#account-level-roles) for the Temporal Cloud account.
- [CA certificate and key](/cloud/certificates) for the Observability integration. You will need the certificate to set up the Observability endpoint in Temporal Cloud.

The following steps describe how to set up Observability on Temporal Cloud to generate an endpoint:

1. Log in to Temporal Cloud UI with an Account Owner or Global Admin [role](/cloud/users#account-level-roles).
2. Go to **Settings** and select **Integrations**.
3. Select **Configure Observability** (if you're setting it up for the first time) or click **Edit** in the Observability section (if it was already configured before).
4. Add your root CA certificate (.pem) and save it. Note that if an observability endpoint is already set up, you can append your root CA certificate here to use the generated observability endpoint with your instance of Grafana.
5. To test your endpoint, run the following command on your host, substituting your end-entity certificate, key, and generated endpoint:

```
curl -v --cert <client-cert.pem> --key <client-key.pem> "<endpoint>/api/v1/query?query=temporal_cloud_v0_state_transition_count"
```

If you have Workflows running on a Namespace in your Temporal Cloud instance, you should see some data as a result of running this command.

6. Copy the HTTP API endpoint that is generated (it is shown in the UI).
This endpoint should be configured as a data source for Temporal Cloud metrics in Grafana. See [Data sources configuration for Temporal Cloud and SDK metrics in Grafana](#grafana-data-sources-configuration) for details.

## SDK metrics setup

SDK metrics are emitted by SDK Clients used to start your Workers and to start, signal, or query your Workflow Executions. You must configure a Prometheus scrape endpoint for Prometheus to collect and aggregate your SDK metrics. Each language development guide has details on how to set this up.

- [Go SDK](/develop/go/observability#metrics)
- [Java SDK](/develop/java/observability#metrics)
- [TypeScript SDK](/develop/typescript/observability#metrics)
- [Python](/develop/python/observability#metrics)
- [.NET](/develop/dotnet/observability#metrics)

The following example uses the Java SDK to set the Prometheus registry and Micrometer stats reporter, set the scope, and expose an endpoint from which Prometheus can scrape the SDK metrics.

```java
//You need the following packages to set up metrics in Java.
//See the Developer's guide for packages required for other SDKs.
//…
//…

{
  // See the Micrometer documentation for configuration details on other supported monitoring systems.
  // Set up the Prometheus registry.
  PrometheusMeterRegistry yourRegistry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

  public static Scope yourScope() {
    //Set up a scope, report every 10 seconds
    Scope yourScope =
        new RootScopeBuilder()
            .tags(ImmutableMap.of(
                "customtag1", "customvalue1",
                "customtag2", "customvalue2"))
            .reporter(new MicrometerClientStatsReporter(yourRegistry))
            .reportEvery(Duration.ofSeconds(10));
    //Start the Prometheus scrape endpoint at port 8077 on your local host
    HttpServer scrapeEndpoint = startPrometheusScrapeEndpoint(yourRegistry, 8077);
    return yourScope;
  }

  /**
   * Starts HttpServer to expose a scrape endpoint. See
   * https://micrometer.io/docs/registry/prometheus for more info.
   */
  public static HttpServer startPrometheusScrapeEndpoint(
      PrometheusMeterRegistry yourRegistry, int port) {
    try {
      HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
      server.createContext(
          "/metrics",
          httpExchange -> {
            String response = yourRegistry.scrape();
            httpExchange.sendResponseHeaders(200, response.getBytes(UTF_8).length);
            try (OutputStream os = httpExchange.getResponseBody()) {
              os.write(response.getBytes(UTF_8));
            }
          });
      server.start();
      return server;
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
//…
// With your scrape endpoint configured, set the metrics scope in your Workflow service stub and
// use it to create a Client to start your Workers and Workflow Executions.
//…

{
  //Create Workflow service stubs to connect to the Frontend Service.
  WorkflowServiceStubs service =
      WorkflowServiceStubs.newServiceStubs(
          WorkflowServiceStubsOptions.newBuilder()
              .setMetricsScope(yourScope()) //set the metrics scope for the WorkflowServiceStubs
              .build());

  //Create a Workflow service client, which can be used to start, signal, and query Workflow Executions.
  WorkflowClient yourClient =
      WorkflowClient.newInstance(service, WorkflowClientOptions.newBuilder().build());
}
//…
```

To check whether your scrape endpoints are emitting metrics, run your code and go to [http://localhost:8077/metrics](http://localhost:8077/metrics) to verify that you see the SDK metrics. You can set up separate scrape endpoints in the Clients that you use to start your Workers and Workflow Executions.
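As a rough equivalent in another SDK, the following Python sketch exposes a Prometheus scrape endpoint through the SDK runtime and builds a Client from it. This is a sketch assuming the `temporalio` package's `Runtime` telemetry API; check the Python SDK observability guide above for the authoritative setup.

```python
# Sketch: expose SDK metrics for Prometheus from the Python SDK runtime.
import asyncio

from temporalio.client import Client
from temporalio.runtime import PrometheusConfig, Runtime, TelemetryConfig

async def main() -> None:
    # Serve a Prometheus scrape endpoint at http://localhost:8077/metrics.
    runtime = Runtime(
        telemetry=TelemetryConfig(
            metrics=PrometheusConfig(bind_address="0.0.0.0:8077")
        )
    )
    client = await Client.connect(
        "<namespace>.<account-id>.tmprl.cloud:7233",  # placeholder target
        namespace="<namespace>.<account-id>",         # placeholder Namespace
        runtime=runtime,
        # ...plus the TLS or API-key options your Namespace requires.
    )
    # Workers and Workflow starts built from this client now emit SDK metrics.

asyncio.run(main())
```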
For more examples on setting metrics endpoints in other SDKs, see the metrics samples:

- [Java SDK Samples](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/metrics)
- [Go SDK Samples](https://github.com/temporalio/samples-go/tree/main/metrics)

## SDK metrics Prometheus Configuration {#prometheus-configuration}

**How to configure Prometheus to ingest Temporal SDK metrics.**

For Temporal SDKs, you must have Prometheus running and configured to listen on the scrape endpoints exposed in your application code. For this example, you can run Prometheus locally or as a Docker container. In either case, ensure that you set the listen targets to the ports where you expose your scrape endpoints.

When you run Prometheus locally, set your target address to port 8077 in your Prometheus configuration YAML file. (We set the scrape endpoint to port 8077 in the [SDK metrics setup](#sdk-metrics-setup) example.)

Example:

```yaml
global:
  scrape_interval: 10s # Set the scrape interval to every 10 seconds. Default is every 1 minute.
  #...

# Set your scrape configuration targets to the ports exposed on your endpoints in the SDK.
scrape_configs:
  - job_name: 'temporalsdkmetrics'
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          # This is the scrape endpoint where Prometheus listens for SDK metrics.
          - localhost:8077
          # You can have multiple targets here, provided they are set up in your application code.
```

See the [Prometheus documentation](https://prometheus.io/docs/introduction/first_steps/) for more details on how you can run Prometheus locally or using Docker.

Note that Temporal Cloud exposes metrics through a [Prometheus HTTP API endpoint](https://prometheus.io/docs/prometheus/latest/querying/api/) (not a scrape endpoint) that can be configured as a data source in Grafana. The Prometheus configuration described here is for scraping metrics data on endpoints for SDK metrics only.

To check whether Prometheus is receiving metrics from your SDK target, go to [http://localhost:9090](http://localhost:9090) and navigate to **Status > Targets**. The status of your target endpoint defined in your configuration appears here.

## Grafana data sources configuration {#grafana-data-sources-configuration}

**How to configure data sources for Temporal Cloud and SDK metrics in Grafana.**

Depending on how you use Grafana, you can either install and run it locally, run it as a Docker container, or log in to Grafana Cloud to set up your data sources. If you have installed and are running Grafana locally, go to [http://localhost:3000](http://localhost:3000) and sign in.

You must configure your Temporal Cloud and SDK metrics data sources separately in Grafana.

To add the Temporal Cloud Prometheus HTTP API endpoint that we generated in the [Temporal Cloud metrics setup](/cloud/metrics/general-setup) section, do the following:

1. Go to **Configuration > Data sources**.
1. Select **Add data source > Prometheus**.
1. Enter a name for your Temporal Cloud metrics data source, such as _Temporal Cloud metrics_.
1. In the **Connection** section, paste the URL that was generated in the Observability section on the Temporal Cloud UI.
1. The **Authentication** section may be left as **No Authentication**.
1. In the **TLS Settings** section, select **TLS Client Authentication**:
   - Leave **ServerName** blank. This is not required.
   - Paste in your end-entity certificate and key.
   - Note that the end-entity certificate used here must be part of the certificate chain with the root CA certificates used in your [Temporal Cloud observability setup](/cloud/metrics/general-setup).
1. Click **Save and test** to verify that the data source is working.

If you see issues in setting this data source, verify your CA certificate chain and ensure that you are setting the correct certificates in your Temporal Cloud observability setup and in the TLS authentication in Grafana.

To add the SDK metrics Prometheus endpoint that we configured in the [SDK metrics setup](#sdk-metrics-setup) and [Prometheus configuration for SDK metrics](#prometheus-configuration) sections, do the following:

1. Go to **Configuration > Data sources**.
2. Select **Add data source > Prometheus**.
3. Enter a name for your Temporal SDK metrics data source, such as _Temporal SDK metrics_.
4. In the **HTTP** section, enter your Prometheus endpoint in the URL field. If running Prometheus locally as described in the examples in this article, enter `http://localhost:9090`.
5. For this example, enable **Skip TLS Verify** in the **Auth** section.
6. Click **Save and test** to verify that the data source is working.

If you see issues in setting this data source, check whether the endpoints set in your SDKs are showing metrics. If you don't see your SDK metrics at the scrape endpoints defined, check whether your Workers and Workflow Executions are running. If you see metrics on the scrape endpoints, but Prometheus shows your targets are down, then there is an issue with connecting to the targets set in your SDKs. Verify your Prometheus configuration and restart Prometheus.

If you're running Grafana as a container, you can set your SDK metrics Prometheus data source in your Grafana configuration. See the example Grafana configuration described in the [Prometheus and Grafana setup for open-source Temporal Service](/self-hosted-guide/monitoring#grafana) article.

### Grafana dashboards setup

To set up dashboards in Grafana, you can use the UI or configure them directly in your Grafana deployment.

:::tip
Temporal provides community-driven example dashboards for [Temporal Cloud](https://github.com/temporalio/dashboards/tree/master/cloud) and [Temporal SDKs](https://github.com/temporalio/dashboards/tree/master/sdk) that you can customize to meet your needs.
:::

To import a dashboard in Grafana:

1. In the left-hand navigation bar, select **Dashboards** > **Import dashboard**.
2. You can either copy and paste the JSON from the [Temporal Cloud](https://github.com/temporalio/dashboards/tree/master/cloud) and [Temporal SDK](https://github.com/temporalio/dashboards/tree/master/sdk) sample dashboards, or import the JSON files into Grafana.
3. Save the dashboard and review the metrics data in the graphs.

To configure dashboards with the UI:

1. Go to **Create > Dashboard** and add an empty panel.
2. On the **Panel configuration** page, in the **Query** tab, select the "Temporal Cloud metrics" or "Temporal SDK metrics" data source that you configured earlier. If you need to add multiple queries from both data sources, choose `-- Mixed --`.
3. Add your metrics queries:
   - For Temporal Cloud metrics, expand the **Metrics browser** and select the metrics you want. You can also select associated labels and values to sort the query data. The [Cloud metrics documentation](/cloud/metrics/reference) lists all metrics emitted from Temporal Cloud.
   - For Temporal SDK metrics, expand the **Metrics browser** and select the metrics you want.
---

## PromQL Metrics

:::tip
Need to scrape metrics into your observability stack? Try out the new [OpenMetrics endpoint](/cloud/metrics/openmetrics).
:::

Metrics for all Namespaces in your account are available from your metrics endpoint. Keep in mind that your Temporal Cloud metrics lag real-time performance by about one minute, and that Temporal Cloud retains raw metrics for only seven days.

To ensure the security of your metrics, a CA certificate dedicated to observability is required. Only clients that use certificates signed by that CA, or that chain up to the CA, can query the metrics endpoint. For more information about CA certificates in Temporal Cloud, see [Certificate requirements](/cloud/certificates#certificate-requirements).

- [General setup](/cloud/metrics/general-setup)
- [Available metrics](/cloud/metrics/reference)
- [Prometheus & Grafana setup](/cloud/metrics/prometheus-grafana)
- [Datadog setup](/cloud/metrics/datadog)

---

## Temporal Cloud metrics reference

A metric is a measurement or data point that provides insights into the performance and health of a system. Temporal Cloud metrics help you monitor performance and troubleshoot errors across different aspects of the Service. This document describes:

- **[Available Temporal Cloud metrics](#available-metrics)**: The metrics emitted by Temporal Cloud include counts of gRPC errors, requests, successful task matches to a poller, and more.
- **[Metrics labels](#metrics-labels)**: Temporal Cloud metrics labels can filter metrics and help categorize and differentiate results.
- **[Operations](#metrics-operations)**: An operation is a special type of label that categorizes the type of operation being performed when the metric was collected.

:::info SDK METRICS
This document discusses metrics emitted by [Temporal Cloud](/cloud). Temporal SDKs also emit metrics, sourced from Temporal Clients and Worker processes. You can find information about Temporal SDK metrics on their [dedicated page](/references/sdk-metrics). Please note:

- SDK metrics start with the prefix `temporal_`.
- Temporal Cloud metrics start with `temporal_cloud_`.
:::

## Available Temporal Cloud metrics {#available-metrics}

**What metrics are emitted from Temporal Cloud?**

The following metrics are emitted for your Namespaces:

### Frontend Service metrics {#frontend}

#### temporal_cloud_v0_frontend_service_error_count

A count of gRPC errors returned, aggregated by operation.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_frontend_service_request_count

A count of gRPC requests received, aggregated by operation.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_resource_exhausted_error_count

gRPC requests received that were rate-limited by Temporal Cloud, aggregated by cause.
Labels: temporal_account, temporal_namespace, resource_exhausted_cause

#### temporal_cloud_v0_state_transition_count

Count of state transitions for each Namespace.

#### temporal_cloud_v0_total_action_count

Approximate count of Temporal Cloud Actions.

Labels: temporal_account, temporal_namespace, is_background, namespace_mode

### Poll metrics {#poll}

#### temporal_cloud_v0_poll_success_count

Tasks that are successfully matched to a poller.

Labels: temporal_account, temporal_namespace, operation, task_type, temporal_service_type

#### temporal_cloud_v0_poll_success_sync_count

Tasks that are successfully sync matched to a poller.

Labels: temporal_account, temporal_namespace, operation, task_type, temporal_service_type

#### temporal_cloud_v0_poll_timeout_count

Counted when no tasks are available for a poller before it times out.

Labels: temporal_account, temporal_namespace, operation, task_type, temporal_service_type

### Replication lag metrics {#replication-lag}

#### temporal_cloud_v0_replication_lag_bucket

A histogram of [replication lag](/cloud/high-availability/monitoring#replication-lag-metric) during a specific time interval for a Namespace with high availability.

Labels: temporal_account, temporal_namespace, le

#### temporal_cloud_v0_replication_lag_count

The [replication lag](/cloud/high-availability/monitoring#replication-lag-metric) count during a specific time interval for a Namespace with high availability.

Labels: temporal_account, temporal_namespace

#### temporal_cloud_v0_replication_lag_sum

The sum of [replication lag](/cloud/high-availability/monitoring#replication-lag-metric) during a specific time interval for a Namespace with high availability.

Labels: temporal_account, temporal_namespace

### Schedule metrics {#schedule}

#### temporal_cloud_v0_schedule_action_success_count

Successful executions of a Scheduled Workflow.

Labels: temporal_account, temporal_namespace

#### temporal_cloud_v0_schedule_buffer_overruns_count

Counted when the average schedule run length is greater than the average schedule interval while a `buffer_all` overlap policy is configured.

Labels: temporal_account, temporal_namespace

#### temporal_cloud_v0_schedule_missed_catchup_window_count

Scheduled executions that were skipped because Workflows were delayed longer than the catchup window.

Labels: temporal_account, temporal_namespace

#### temporal_cloud_v0_schedule_rate_limited_count

Workflows that were delayed due to exceeding a rate limit.

Labels: temporal_account, temporal_namespace

### Service latency metrics {#service-latency}

#### temporal_cloud_v0_service_latency_bucket

Latency for `SignalWithStartWorkflowExecution`, `SignalWorkflowExecution`, and `StartWorkflowExecution` operations.

Labels: temporal_account, temporal_namespace, le, operation, temporal_service_type

#### temporal_cloud_v0_service_latency_count

Count of latency observations for `SignalWithStartWorkflowExecution`, `SignalWorkflowExecution`, and `StartWorkflowExecution` operations.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_service_latency_sum

Sum of latency observation time for `SignalWithStartWorkflowExecution`, `SignalWorkflowExecution`, and `StartWorkflowExecution` operations.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type
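Because the `_bucket` metrics are histograms, you can derive percentiles from them with `histogram_quantile` and the `le` label. A sketch of a p95 query for Workflow start latency, using the bucket metric above (adjust the label filters to your Namespace):

```text
histogram_quantile(0.95,
  sum by (le) (
    rate(temporal_cloud_v0_service_latency_bucket{operation="StartWorkflowExecution"}[5m])
  )
)
```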
### Workflow metrics {#workflow}

#### temporal_cloud_v0_workflow_cancel_count

Workflows canceled before completing execution.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_workflow_continued_as_new_count

Workflow Executions that were Continued-As-New from a past execution.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_workflow_failed_count

Workflows that failed before completion.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_workflow_success_count

Workflows that completed successfully.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_workflow_terminate_count

Workflows terminated before completing execution.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

#### temporal_cloud_v0_workflow_timeout_count

Workflows that timed out before completing execution.

Labels: temporal_account, temporal_namespace, operation, temporal_service_type

## Metrics labels {#metrics-labels}

**What labels can you use to filter metrics?**

Temporal Cloud metrics include key-value pairs called labels in their associated metadata. Labels help you categorize and differentiate metrics for precise filtering, querying, and aggregation. Use labels to filter on specific attributes or to compare values, such as numeric buckets in histograms. This added context enhances monitoring and analysis, providing deeper insights into your data.

Use the following labels to filter metrics:

| Label | Explanation |
| --- | --- |
| `le` | Less than or equal to (`le`) is used in histograms to categorize observations into buckets based on their value being less than or equal to a predefined upper limit. |
| `operation` | This includes gRPC operations and general Cloud operations such as `SignalWorkflowExecution`, `StartBatchOperation`, `StartWorkflowExecution`, `TaskQueueMgr`, `TerminateWorkflowExecution`, `UpdateNamespace`, and `UpdateSchedule`. See [Metric Operations](#metrics-operations) and the [Temporal Cloud Operation reference](/references/operation-list). |
| `resource_exhausted_cause` | Cause for resource exhaustion. |
| `task_type` | Activity, Workflow, or Nexus. |
| `temporal_account` | Temporal Account. |
| `temporal_namespace` | Temporal Namespace. |
| `temporal_service_type` | Frontend, Matching, History, or Worker. |
| `is_background` | This label on `temporal_cloud_v0_total_action_count` indicates when actions are produced by a Temporal background job, for example, hourly Workflow Export. |
| `namespace_mode` | This label on `temporal_cloud_v0_total_action_count` indicates whether actions are produced by an active or a standby Namespace. For a regular Namespace, `namespace_mode` will always be "active". |

The following is an example of how you can filter metrics using labels:

```text
temporal_cloud_v0_poll_success_count{__rollup__="true", operation="TaskQueueMgr", task_type="Activity", temporal_account="12345", temporal_namespace="your_namespace.12345", temporal_service_type="matching"}
```

## Operations {#metrics-operations}

**What operation labels are captured by Temporal Cloud?**

Operations are a special class of metrics label. They describe the context during which a metric was captured.
Temporal Cloud includes the following operations labels:

- AdminDescribeMutableState
- AdminGetWorkflowExecutionRawHistory
- AdminGetWorkflowExecutionRawHistoryV2
- AdminReapplyEvents
- CountWorkflowExecutions
- CreateSchedule
- DeleteSchedule
- DeleteWorkflowExecution
- DescribeBatchOperation
- DescribeNamespace
- DescribeSchedule
- DescribeTaskQueue
- DescribeWorkflowExecution
- GetWorkerBuildIdCompatibility
- GetWorkerTaskReachability
- GetWorkflowExecutionHistory
- GetWorkflowExecutionHistoryReverse
- ListBatchOperations
- ListClosedWorkflowExecutions
- OperatorDeleteNamespace
- PatchSchedule
- PollActivityTaskQueue
- PollNexusTaskQueue
- PollWorkflowExecutionHistory
- PollWorkflowExecutionUpdate
- PollWorkflowTaskQueue
- QueryWorkflow
- RecordActivityTaskHeartbeat
- RecordActivityTaskHeartbeatById
- RegisterNamespace
- RequestCancelWorkflowExecution
- ResetStickyTaskQueue
- ResetWorkflowExecution
- RespondActivityTaskCanceled
- RespondActivityTaskCompleted
- RespondActivityTaskCompletedById
- RespondActivityTaskFailed
- RespondActivityTaskFailedById
- RespondNexusTaskCompleted
- RespondNexusTaskFailed
- RespondQueryTaskCompleted
- RespondWorkflowTaskCompleted
- RespondWorkflowTaskFailed
- SignalWithStartWorkflowExecution
- SignalWorkflowExecution
- StartBatchOperation
- StartWorkflowExecution
- StopBatchOperation
- TerminateWorkflowExecution
- UpdateNamespace
- UpdateSchedule
- UpdateWorkerBuildIdCompatibility
- UpdateWorkflowExecution

As the following table shows, certain [metrics groups](#available-metrics) support [operations](#metrics-operations) for aggregation and filtering:

| Metrics Group / Operations | All Operations | SignalWithStartWorkflowExecution / SignalWorkflowExecution / StartWorkflowExecution | TaskQueueMgr | CompletionStats |
| --- | --- | --- | --- | --- |
| **[Frontend Service Metrics](#frontend)** | X | | | |
| **[Service Latency Metrics](#service-latency)** | | X | | |
| **[Poll Metrics](#poll)** | | | X | |
| **[Workflow Metrics](#workflow)** | | | | X |
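As a quick sketch of how the `operation` label can be used in practice, the following query surfaces the five operations producing the most Frontend gRPC errors (metric name from the reference above):

```text
topk(5, sum by (operation) (rate(temporal_cloud_v0_frontend_service_error_count[5m])))
```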
---

## Automated Migration

Automated migration is designed to provide a zero-downtime, secure means of migrating to Temporal Cloud. This guide outlines the current process for transitioning workflows from a self-hosted setup to one hosted within Temporal Cloud.

:::tip Support, stability, and dependency info
Automated migration is currently in [Pre-release](/evaluate/development-production-features/release-stages#pre-release).
:::

### Solution overview

As illustrated below, two components support automated migrations:

1. Migration proxy - The [s2s proxy](https://hub.docker.com/r/temporalio/s2s-proxy/tags) provides a security layer between the self-hosted server and the migration server. The customer-side proxy is installed on your infrastructure, while the cloud-side proxy is managed by Temporal. Communications to the proxies are secured via [mutual TLS](https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/) (mTLS).
2. Migration server - A Temporal service (server) that enables secure connections between your self-hosted setup and Temporal Cloud and provides the core functionality of the migration.

![Temporal automated migration components](/img/cloud/migration/auto-migration-components.png)

### Process overview

The migration process is separated into several phases, part of which involves coordinating with Temporal to create the necessary cloud-side resources. Migration involves the following phases:

1. Prepare - Deploy a customer-side migration proxy and coordinate with Temporal to configure a migration server.
2. Initiate - Use the _[StartMigrationRequest](https://pkg.go.dev/github.com/temporalio/tcld#readme-start-a-migration)_ API to specify namespaces for migration along with endpoint details. A corresponding namespace is created in Temporal Cloud with a "non-active" status. You will configure permissions and access controls during this phase.
3. Monitor - The _[GetMigrationResponse](https://pkg.go.dev/github.com/temporalio/tcld#readme-get-a-migration)_ API allows you to track replication progress, including workflows replicated, remaining workflows, and estimated time to completion.
4. Handover - Once replication is complete, you may use the _[HandoverNamespaceRequest](https://pkg.go.dev/github.com/temporalio/tcld#readme-perform-handover-during-a-migration)_ API to switch traffic between your source namespace (self-hosted) and target namespace (cloud). This is the opportunity to validate functionality within Temporal Cloud prior to finalizing the migration.
5. Finalize - Use the _[ConfirmMigrationRequest](https://pkg.go.dev/github.com/temporalio/tcld#readme-confirm-a-migration)_ API to finalize the migration. In the event of issues, you may use _[AbortMigrationRequest](https://pkg.go.dev/github.com/temporalio/tcld#readme-abort-a-migration)_ to roll back changes without impacting your workflows.

These APIs provide granular control over every step of the process, ensuring transparency and flexibility.

### Current limitations

The following are known limitations for the Pre-release phase of this service.

- OSS server versions 1.22 or newer are required. Refer to the [upgrade](https://docs.temporal.io/self-hosted-guide/upgrade-server#upgrade-server) procedure as needed.
- History shard counts must be a multiple of two.
- Enabling payload encryption as part of migration is not yet supported. If payloads are already [encrypted](https://docs.temporal.io/payload-codec#encryption) in your self-hosted server via a data converter, they will remain encrypted during and after migration.
- If you are using multi-cluster replication in your self-hosted setup and have previously failed over namespaces, this may affect your eligibility for automated migration.
- If you have multiple self-hosted servers and they are all configured with the same cluster name (by default, Temporal uses 'active' as the cluster name), they cannot be connected to a single migration server simultaneously due to the cluster name collision. There are two available options:
  1. Migrate one server at a time using a single migration server.
  2. Create multiple migration servers (one for each self-hosted server) if you need to migrate all servers simultaneously.
- OSS supports cross-namespace commands (e.g., parent-child, SignalExternal, CancelExternal) through the `system.enableCrossNamespaceCommands` configuration. This configuration is disabled on Temporal Cloud. If cross-namespace calls exist within workflow code, they must be updated or removed prior to migration.
### Getting started

To prepare for migration, you must first provide qualification details to Temporal via a support ticket. If eligible, the Temporal team will work with you to facilitate your migration.

Submit a support ticket with the following details:

- A list of your Temporal accounts
- Target Temporal Cloud service regions
- For each cluster, the server configuration obtained by running `temporal operator cluster describe --address <frontend-address> --output json` (see the [notes](#alternative-commands-for-versions-1281-and-prior) for Temporal server version 1.28.1 and prior).
- Metrics for the namespaces to be migrated:
  - number of open/closed workflows
  - storage used
  - retention policy
  - any custom search attributes
  - peak requests per second (RPS) and actions per second (APS) - refer to [this document](https://docs.google.com/document/d/151xjeI53SBfJ94X1toi5krPp4oeyzJ6wVUrOBhgK714) for instructions on fetching these metrics
- If you use a SQL-based datastore for visibility and you use custom search attributes, provide the _CustomSearchAttributeAliases_ of your namespace by running `temporal operator namespace describe` using the [latest Temporal CLI](https://github.com/temporalio/cli).

:::warning
Proceed only when your request has been approved by Temporal.
:::

### Create cloud-side resources

Cloud-side resources must be in place prior to starting a migration. Complete the following procedure.

1. Create one or more empty namespaces in Temporal Cloud to serve as the migration targets. Since migration cannot proceed into a namespace that's already in use, these namespaces should remain empty (no workflows).
2. Create a support ticket requesting that these namespaces be configured with [system limits](https://docs.temporal.io/cloud/limits) (including APS/RPS) matching your existing self-hosted workload.
3. Verify that a migration endpoint has been created in Temporal Cloud (e.g., `your-endpoint.{your-acct}.tmprl.cloud`). If you don't have one, request one via a support ticket.
4. Create any required custom search attributes used by your workflows.
5. If you need [private connectivity](https://docs.temporal.io/cloud/connectivity) for the namespace in Temporal Cloud, prepare this setup in advance.

### Prepare your self-hosted service

1. Set the following [dynamic configuration](https://docs.temporal.io/references/dynamic-configuration) setting and then restart the Temporal frontend.

   ```yaml
   frontend.keepAliveMaxConnectionAge:
     - value: "2h"
   ```

2. If not already enabled, enable _GlobalNamespace_ by updating the _clusterMetadata_ and the _dcRedirectionPolicy_ in your [server config yaml file](https://github.com/temporalio/temporal/tree/main/config) to the following, then restart all Temporal services (frontend, history, matching, worker), starting with the frontend.

   ```yaml
   dcRedirectionPolicy:
     policy: "all-apis-forwarding"
   clusterMetadata:
     enableGlobalNamespace: true # add this
     failoverVersionIncrement: 1000000 # to match failoverVersionIncrement in our migration server
     masterClusterName: _NO_CHANGE_
     currentClusterName: _NO_CHANGE_
     clusterInformation:
       _NO_CHANGE_:
         enabled: true
         initialFailoverVersion: [1,100] # pick a unique number between 1 and 100 for each server
         rpcName: _NO_CHANGE_
         rpcAddress: _NO_CHANGE_
   ```

3. Run `temporal operator cluster describe` to check the output. The following output is expected:

   ```json
   "failoverVersionIncrement": "1000000",
   "initialFailoverVersion": "the number you picked",
   "isGlobalNamespaceEnabled": true
   ```

4. Create a CA certificate as a PEM file and share it in your support ticket (a sketch of one way to generate it follows this list). This is required for [mutual TLS](https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/) (mTLS) between your s2s-proxy (as client) and the Temporal s2s-proxy (as server). Note that this differs from the CA used for [accessing namespaces](https://docs.temporal.io/cloud/certificates) in Temporal Cloud.
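One way to produce such a CA locally is with OpenSSL. This is a minimal sketch with hypothetical file names and subject; your organization's existing PKI tooling works just as well:

```command
# Generate a self-signed CA certificate (ca.pem) and private key (ca.key).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout ca.key -out ca.pem \
  -subj "/CN=s2s-proxy migration CA"
# Share only ca.pem in the support ticket; keep ca.key private.
```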
### Deploying the self-hosted s2s proxy

Complete the following steps to deploy and test the self-hosted s2s proxy. The proxy needs to be accessible from your server and requires a public internet connection to reach the Temporal s2s-proxy via the migration endpoint.

1. Obtain the latest Docker image from the [temporalio/s2s-proxy repository](https://hub.docker.com/r/temporalio/s2s-proxy/tags).
2. Gather the CA certificate generated in the previous steps.
3. Deploy one replica of the s2s-proxy (minimum 4 CPU and 512 MB memory) and configure the s2s-proxy using our [helm chart example](https://github.com/temporalio/s2s-proxy/blob/main/charts/s2s-proxy/README.md) as a reference.
4. Test access by running `temporal operator cluster describe --address {the-outbound-external-address-of-your-proxy}`. It should display the information of the migration server.
5. Ask the Temporal team to perform a similar test to verify that the migration server can access your server.

Proceed only when you have received confirmation from Temporal that connectivity tests have passed.

### Migration process

Once all prerequisites have been completed, migration may commence.

#### Migration start

A Temporal operator will initiate the following migration commands on your behalf. The migration proceeds through these phases:

1. Start migration - Your self-hosted namespace remains active while the cloud namespace becomes passive. Workflows are replicated from the self-hosted namespace to the cloud namespace. Once the cloud namespace is in sync with the self-hosted namespace, the migration is ready for handover.
2. Handover to the Cloud - The cloud namespace becomes active and the self-hosted namespace becomes passive. Workflows are then replicated from the cloud namespace to the self-hosted server.
3. Confirm migration completion - Workflow replication from the cloud namespace to the self-hosted server stops.

Before the final step, it is possible to hand the namespace back over to the self-hosted server or abort the migration process completely. After this step, migrating namespaces from the cloud back to a self-hosted server is an unsupported operation.
#### Transfer to cloud

There are two options for switching Temporal clients to the cloud:

- Option 1 (recommended) - Deploy two sets of Temporal clients: one pointing to your Temporal server and one to the Cloud namespace endpoint. This is the recommended option because your workflows will continue to make progress during the handover, even if your cloud Temporal client is unable to access the cloud (due to misconfiguration, for example). The process is as follows:
  1. Direct your cloud-based Temporal clients to the cloud namespace endpoint. Initially, these clients will connect and send Poll requests but will not receive any tasks.
  2. Start migration. Your self-hosted namespace is active while your cloud namespace is passive (or standby). Your cloud Temporal clients can begin receiving tasks, but all requests from cloud clients to the Cloud namespace will automatically forward from the cloud to your self-hosted server.
  3. Hand over the namespace to the cloud. Your cloud namespace becomes active and your self-hosted namespace becomes passive. All requests from your self-hosted Temporal clients will automatically forward from your server to the cloud.
  4. Complete migration. Your self-hosted Temporal clients will no longer receive any tasks from your server, allowing you to stop these clients.
- Option 2 - Deploy one set of Temporal clients and switch the namespace endpoint during migration. With this option, if your workers are misconfigured during the switch, it is possible for workflows to stop making progress. It is important to ensure that all workers maintain connectivity to the cloud to avoid this scenario. The process is as follows:
  1. Start migration.
  2. Switch your Temporal clients to point to the cloud namespace endpoint. Requests from your Temporal clients will automatically forward from the cloud to your server. Alternatively, you may switch Temporal clients to the cloud namespace endpoint after handover.
  3. Hand over the namespace to the cloud. Requests from your Temporal clients will now be served by the cloud and will not be forwarded to your server.
  4. Confirm migration completion.

#### Final Validation

Before failing over your namespace to Temporal Cloud, review the following production readiness checklist. These steps help ensure a smooth transition and minimize surprises in production.

- Understand how to access metrics for your namespace on Temporal Cloud.
  - General workflow monitoring metrics (schedule-to-start latency, start vs. completion rate, sync match rate, etc.).
- Learn how capacity management works in Temporal Cloud, including how to request capacity increases.
- Plan for a worker tuning session: performance characteristics differ between Temporal Cloud and a self-hosted cluster, which can lead to unexpected symptoms and require optimization.
- Know how to reach out to your Temporal Solutions Architect (SA) and Account Executive (AE) for assistance.

### Additional Notes

#### Special dynamic configuration for Versions 1.22.x-1.23.x

Temporal versions 1.22.x and 1.23.x include support for stream-based replication, but it is disabled by default. Since those releases, stream-based replication has been validated as more reliable than the poll-based replication that remained the default in 1.22 and 1.23. When preparing for an S2C (self-hosted to Cloud) migration on these versions, configure the following dynamic setting to enable stream-based replication:

```yaml
history.enableReplicationStream:
  - value: true
```

Enabling this configuration requires a restart of your history pods.

#### Alternative Commands for Versions 1.28.1 and Prior

The `temporal operator cluster describe` command is missing some details needed for migration when run against Version 1.28.1 or older. If you are running that version of Temporal or older, substitute the following commands for the `temporal operator cluster describe` command cited in this document.

If you are using tctl, use the following command: `tctl --address <frontend-address> admin cluster describe`

If you are not a tctl user, use the following alternative: `grpcurl -v -plaintext <frontend-address> temporal.server.api.adminservice.v1.AdminService.DescribeCluster`
---

## Migrate

Learn how to migrate your Temporal workflows with zero downtime:

- [Automated Migration](/cloud/migrate/automated) - This process enables seamless transitions from self-hosted Temporal instances to Temporal Cloud.
- [Manual Migration](/cloud/migrate/manual) - This process enables transitions from self-hosted Temporal instances to Temporal Cloud by updating clients and workflows to utilize new resources within Temporal Cloud.
- [Migrate between regions](/cloud/migrate/migrate-within-cloud) - This process allows you to migrate a Temporal Cloud Namespace between regions or providers.

---

## Manual Migration

Migrating to Temporal Cloud from a self-hosted Temporal Service will have different requirements depending on your usage. This guide provides some guidance based on our experience helping customers of all sizes migrate successfully.

### What to expect from a migration

Depending on your Workflows' requirements, the migration process may be as simple as changing a few parameters, or it may require more extensive code changes. There are two aspects to consider when migrating: your Temporal Client connection code and your Workflow Executions. Here's a high-level overview of what you can expect:

- **Introduce another Temporal Client to your Starter and Worker Processes:** Configure and deploy a new Temporal Client so that Temporal Cloud becomes responsible for new Workflow Executions.
- **Migrate Workflow Executions:** There are different approaches for new, running, and completed Workflow Executions.
  - **New Workflow Executions:** When you no longer need to send Signals or Queries to your self-hosted Temporal Service, you can deprecate your old Client code. Until then, your self-hosted Temporal Service can receive relevant traffic, while new Workflow Executions are sent to Temporal Cloud.
  - **Running Workflow Executions:** Short-running Workflows can often be drained and then started again on Temporal Cloud. Long-running Workflows that cannot be drained might require you to implement more code changes to pass the state of the currently running Workflow to Temporal Cloud.
  - **Completed Workflow Executions:** Completed Workflow Execution History cannot be automatically migrated to Temporal Cloud. Refer to [Multi-Cluster Replication](#multi-cluster-replication) for more information.

### Updating Client connection code in your Workers

Whether you're self-hosting Temporal or using Temporal Cloud, you manage the runtime of your code. To migrate your Workflows to Temporal Cloud, you need to change some parameters in the Client connection code, such as updating the namespace and gRPC endpoint. The changes needed to direct your Workflow to your Temporal Cloud Namespace are only a few lines of code, including:

- [Add your SSL certificate and private key](/cloud/saml) associated with your Namespace.
- [Copy the Cloud-hosted endpoint](/cloud/namespaces#temporal-cloud-grpc-endpoint) from the Namespace detail Web page. The endpoint uses the format `<namespace>.<account>.tmprl.cloud:<port>`.
- [Connect to Temporal Cloud](/cloud/get-started) with your Client, as shown in the sketch after this list.
- [Configure tcld, the Cloud CLI](/cloud/tcld), with the same address, Namespace, and certificate used to create a Client through code.
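As a sketch of the Client-side change, here's what the connection looks like with the Go SDK; the endpoint, Namespace, and certificate paths below are placeholders for your Namespace's values:

```go
package main

import (
	"crypto/tls"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Load the client certificate and key issued from the CA uploaded to your Namespace.
	cert, err := tls.LoadX509KeyPair("client.pem", "client.key")
	if err != nil {
		log.Fatalln("failed to load client certificate", err)
	}

	// Point the Client at your Cloud Namespace's gRPC endpoint instead of your self-hosted frontend.
	c, err := client.Dial(client.Options{
		HostPort:  "your-namespace.your-account.tmprl.cloud:7233",
		Namespace: "your-namespace.your-account",
		ConnectionOptions: client.ConnectionOptions{
			TLS: &tls.Config{Certificates: []tls.Certificate{cert}},
		},
	})
	if err != nil {
		log.Fatalln("failed to create Temporal Client", err)
	}
	defer c.Close()
}
```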
### Migrating your Workflow Executions

A Temporal Service stores the complete Event History for the entire lifecycle of a Workflow Execution. To migrate from a self-hosted Temporal Service to Temporal Cloud, take into account the current state, Event History, and any future expectations of your Workflow Executions.

**New Workflows are automatically executed on Temporal Cloud.** Once you've made the code changes in Step 1 and your new code is deployed, new Workflow Executions will be sent to Temporal Cloud.

**Existing Workflows must receive Signals to migrate and re-execute on Cloud.** If you maintain your self-hosted instance, you will still be able to use it to access any execution history from before your migration. You can also export JSON from your previous execution history, which you can then import into your own analytics system.

**Running Workflows can either be drained or migrated.** If your Workflows can be completed before any compelling event that drives a move to Temporal Cloud, those Workflows can be automatically restarted on Temporal Cloud. If your Workflows need to run continuously, you must migrate them while they are running. To accomplish this migration, cancel your current Workflow and pass its current state to a new Workflow in Temporal Cloud. Refer to [this repository](https://github.com/temporalio/temporal-migration) for an example of migrating running Workflows in Java.

When performing a live migration, make sure your Worker capacity can support the migration load. Both a [Signal](/sending-messages#sending-signals) and a [Query](/sending-messages#sending-queries) will be executed during the course of the migration. Also, the Query API loads the entire history of Workflows into Workers to compute the result (if it is not already cached). That means your self-hosted Temporal Service Worker capacity will need to support having those executions in memory to serve those requests. The volume of these requests might be high, because they execute against every match to a `ListFilter`.

### Considerations when resuming Workflows on a new Temporal Service or Namespace

- **Skipping Steps:** If your Workflow steps cannot guarantee idempotency, determine whether you need to skip those steps when resuming the execution in the target Namespace.
- **Elapsed Time:** If your Workflow is "resuming sleep" in the target Namespace, determine how you will calculate the delta for the sleep invocation in the new execution.
- **Child Relationships:** If your Workflow has Child Workflow relationships (other than Detached Parent Close Policy children), determine how you can pass the state of those children into the parent to execute the child in a resumed state.
- **Heartbeat state:** If you have long-running Activities relying on heartbeat state, determine how you can resume these Activities in the target Namespace.
- Child Workflows with the same type as their Parent types are returned in List Filters used to gather relevant executions. Unless these are Detached `ParentClosePolicy` children, this is not what you want, since the Parent/Child relationship will not be carried over to the target Namespace.
- Long-running Activities that use heartbeat details will not receive the latest details in the target Namespace.
- The duration between Awaitables inside a Workflow definition needs to be considered for elapsed-time accuracy when resuming in the target Namespace.
- When Signaling directly from one Workflow to another, make sure to handle `NotFound` executions in the target Namespace. The Workflows may resume out of order.

### Other considerations when migrating

- Have you added an mTLS certificate to your Temporal Namespace? Review our [documentation for adding a certificate to your Temporal Cloud account](/cloud/certificates) for more information.
- There are differences in how metrics are generated in self-hosted Temporal and Temporal Cloud. Review the [documentation on Temporal Cloud metrics](/cloud/metrics/) for more information.
- Consider the implications for [security and access to your Temporal Service](/cloud/security).
- Review your current load (actions per second) and speak to your Account Executive and Solutions Architect so we can set appropriate [Namespace limits](/cloud/limits).

### Multi-Cluster Replication

[Multi-Cluster Replication](/self-hosted-guide/multi-cluster-replication) is an experimental feature that asynchronously replicates Workflow Executions from active Clusters to other passive Clusters for backup and state reconstruction.

Migrating Execution History from a self-hosted Temporal Service to Temporal Cloud is not currently supported. However, a migration tool based on Multi-Cluster Replication, which will enable this, is currently in development for Temporal Cloud.

If you have used this feature locally or you are interested in using it to migrate to Temporal Cloud, [create a support ticket](https://docs.temporal.io/cloud/support) or watch this space for more information about public availability.

---

## Migrate between regions

Temporal Cloud's [High Availability features](/cloud/high-availability) allow you to migrate a Temporal Cloud Namespace from one region or cloud provider to another with zero downtime.

## Preparing to migrate

If a Namespace uses Export, you must stop Export and move its region configuration to the new region for Export jobs to continue after migration. See [failover scenarios](/cloud/export#failover-scenarios) for details.

[Using High Availability features affects pricing](/cloud/pricing#high-availability-features).

## Steps to migrate

1. Add a Namespace replica in the region you want to migrate to. See [regions](/cloud/regions) for a list of available regions and supported multi-region and multi-cloud configurations.
2. Wait for the replica to become active. The Cloud UI will display a time estimate, and namespace admins will receive an email when the replica is active.
3. If your workers are using API key authentication, ensure your workers (and all other client code) are updated to [use the regional endpoint of the new replica](/cloud/namespaces#access-namespaces).
4. Trigger a failover to the new region.
5. Remove the Namespace replica in the region you are migrating from.

:::note
If using [API keys](/cloud/api-keys) for worker authentication, you must open a [support ticket](/cloud/support#support-ticket) to remove the replica.
:::

:::note
All replica changes are subject to a [cooldown period](/cloud/high-availability/enable#changing) before further replica changes can be made.
:::

---

## Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability). Learn why you should use Nexus in the [evaluation guide](/evaluate/nexus).
:::

[Temporal Nexus](/nexus) allows you to connect Temporal Applications across (and within) isolated Namespaces. This provides all the benefits of Durable Execution across team and application boundaries with improved modularity, security, debugging, and fault isolation. Nexus supports cross-team, cross-domain, cross-namespace, multi-region, and multi-cloud use cases.

Temporal Cloud support is built on top of the [core Nexus experience](/nexus) and adds a global Nexus Registry within an Account, enhanced security, and multi-region connectivity within and across AWS and GCP.

:::tip RELATED

- [Evaluate](/evaluate/nexus) why you should use Nexus and learn more about [Nexus use cases](/evaluate/nexus#use-cases).
- [Learn Nexus concepts](/nexus) in the Encyclopedia.
:::

## Global Nexus Registry

The Nexus Registry in Temporal Cloud is scoped to an Account. Workers in any Namespace can host Nexus Services for others to use within an Account.

## Built-in access controls

Temporal Cloud has built-in Endpoint access controls to restrict which callers can use a Nexus Endpoint.

## Audit logging

Temporal Cloud supports audit log streaming for Nexus Registry actions that create, update, or delete Endpoints.

## Multi-region connectivity

Nexus requests in Temporal Cloud are routed across Namespaces, within and across AWS and GCP, using a global mTLS-secured Envoy mesh. Built-in Nexus Machinery provides reliable at-least-once execution, and Workflow policy can deduplicate requests for exactly-once execution, even across multi-region boundaries.

## Terraform support

The [Terraform provider for Temporal Cloud](/cloud/terraform-provider#manage-temporal-cloud-nexus-endpoints-with-terraform) supports managing Nexus Endpoints.

## Learn more

- [Evaluate](/evaluate/nexus) why you should use Nexus and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- [Learn key Nexus concepts](/nexus) and how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4&t=934s).
- Explore [additional resources](/evaluate/nexus#learn-more) to learn more about Nexus.

---

## Latency and Availability - Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability). Learn why you should use Nexus in the [evaluation guide](/evaluate/nexus).
:::

Nexus requests (commands, polling) have the same latency SLOs and error rate SLAs as other Worker requests in both the caller and handler Namespaces.

## Latency metrics

Nexus supports various [latency metrics](/nexus/metrics).

## Worker to Temporal Cloud interactions

Nexus interactions between a Worker and Temporal Cloud use the Worker's Namespace gRPC endpoint. Nexus-related Worker interactions with Temporal Cloud have the same [latency SLOs](/cloud/service-availability#latency) and [availability SLAs](/cloud/sla) as other calls to a Namespace's gRPC endpoint.

This applies to the following Nexus-related interactions between a Worker and Temporal Cloud:

- Caller Namespace
  - RespondWorkflowTaskCompleted \- schedule a Nexus Operation.
- Handler Namespace
  - PollNexusTaskQueue \- get a [Nexus Task](/tasks#nexus-task) to process, for example to start a Nexus Operation.
  - RespondNexusTaskCompleted \- report the Nexus Task was successful.
  - RespondNexusTaskFailed \- report the Nexus Task failed.

## Nexus connectivity across Namespaces

Nexus connectivity in Temporal Cloud is provided by a global mTLS-secured Envoy mesh. The cross-namespace latency between the caller's Nexus Machinery and the handler's Nexus Machinery varies based on the locality of the caller and handler Namespaces, which may be placed in different regions. Communication between Namespaces in the same region will have lower latency; communication across different regions will have higher latency. Consult the cross-region latency tables for your cloud provider(s) to estimate the latency for Nexus communication across Namespaces in Temporal Cloud.

---

## Limits - Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability).
Learn why you should use Nexus in the [evaluation guide](/evaluate/nexus).
:::

Temporal Cloud has default limits for several aspects of Nexus. Many of these defaults are configurable, so if you need them changed, please open a support ticket.

## Rate Limiting

Nexus requests (commands, polling) are counted as part of the overall Namespace RPS limit in both the caller and handler Namespaces. Default Namespace RPS limits are set at 1600 and automatically adjust based on recent usage (over the prior 7 days).

## Operational Limits

Nexus has operational limits for things like the maximum number of Nexus Endpoints and the maximum request handler timeout.

### Max Nexus Endpoints

By default, each account is provisioned with a maximum of 100 Nexus Endpoints. You can request increases beyond the initial 100 Endpoint limit by opening a support ticket.

### Workflow Max Nexus Operations

A single Workflow Execution can have a maximum of 30 in-flight Nexus Operations. See the Nexus Encyclopedia entry for [additional details](/workflow-execution/limits#workflow-execution-nexus-operation-limits).

### Nexus Request Handler Timeout

Nexus Operation handlers have less than 10 seconds to process a single Nexus start or cancel request. Handlers should observe the context deadline and ensure they do not exceed it. This includes fully processing a synchronous Nexus Operation and starting an asynchronous Nexus Operation, for example one that starts a Workflow. If a handler doesn't respond within the context deadline, a context deadline exceeded error is tracked in the caller Workflow's pending Nexus Operations, and the Nexus Machinery retries the Nexus request with an exponential backoff policy.
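For example, a handler can check how much of the deadline remains before starting expensive work. A minimal Go sketch; the function shape and threshold are illustrative, not a Nexus SDK signature:

```go
package handler

import (
	"context"
	"fmt"
	"time"
)

// startOperation is an illustrative handler body, not an SDK-defined interface.
func startOperation(ctx context.Context) error {
	// Bail out early if too little of the per-request deadline remains.
	if deadline, ok := ctx.Deadline(); ok && time.Until(deadline) < time.Second {
		return fmt.Errorf("not enough time remaining to start the operation")
	}
	// ... do the start work, checking ctx.Err() between blocking steps
	// so the handler never overruns the deadline ...
	return ctx.Err()
}
```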
### Nexus Operation Maximum Duration

Each Nexus Operation has a maximum ScheduleToClose duration of 60 days, which is most applicable to asynchronous Nexus Operations that are completed with an asynchronous callback using a separate Nexus request from the handler back to the caller Namespace. The 60-day maximum is a limit we will look to increase at some point in the future. While the caller of a Nexus Operation can configure the ScheduleToClose duration to be shorter than 60 days, the maximum duration cannot be extended beyond 60 days and will be capped by the server at 60 days.

---

## Observability - Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability). Learn why you should use Nexus in the [evaluation guide](/evaluate/nexus).
:::

Nexus provides metrics and audit log streaming, in addition to integrated [execution debugging](/nexus/execution-debugging).

## Metrics

Nexus provides the following metrics:

- [SDK metrics](/nexus/metrics#sdk-metrics) \- emitted by a Worker.
- [Cloud metrics](/nexus/metrics#cloud-metrics) \- emitted by Temporal Cloud.

## Audit Logging

The following Nexus control plane actions are sent to the [Audit Logging](/cloud/audit-logs) integrations:

- Create Nexus Endpoint: `CreateNexusEndpoint`
- Update Nexus Endpoint: `UpdateNexusEndpoint`
- Delete Nexus Endpoint: `DeleteNexusEndpoint`

---

## Pricing for Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability). Learn why you should use Nexus in the [evaluation guide](/evaluate/nexus).
:::

The pricing for [Temporal Nexus](/evaluate/nexus) is:

- **One Action to start or cancel a Nexus Operation** in the caller Namespace. The underlying Temporal primitives such as Workflows, Activities, and Signals created by a Nexus Operation handler (directly or indirectly) result in the normal Actions for those primitives. This includes retries for underlying Temporal primitives like Activities.
- **No Actions result from handling or retrying the Nexus Operation itself.** However, while the retry of the Nexus Operation incurs no charge, any billable action initiated by the handler (such as an Activity) will be charged if it fails and is subsequently retried.

See [Pricing](/cloud/pricing) for additional details.

## Learn more

- [Evaluate](/evaluate/nexus) why you should use Nexus and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- Learn how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4) and [Encyclopedia](/nexus).
- [Additional resources](/evaluate/nexus#learn-more) to learn more about Nexus.

---

## Notifications

## Get notified about Temporal Cloud status {#cloud-status}

In the event of an incident, Temporal updates the [Temporal Cloud status page](https://status.temporal.io/) with important updates. Users can subscribe to updates in their preferred mode (e.g., email, Slack, or SMS) by visiting this page.

## Get notified about administrative events {#admin-notifications}

Temporal Cloud sends emails to notify users of important administrative events.

| Reason for email | Who receives email |
| --- | --- |
| Certificate Expiring in 15 days | Global Administrator, Namespace Administrator, Account Owner |
| Certificate Expiring in 10 days | Global Administrator, Namespace Administrator, Account Owner |
| Certificate Expiring in 5 days | Global Administrator, Namespace Administrator, Account Owner |
| API Key Expiring in 30 days | Global Administrator, Account Owner, individual user (if API Key has an owner) |
| API Key Expiring in 20 days | Global Administrator, Account Owner, individual user (if API Key has an owner) |
| API Key Expiring in 10 days | Global Administrator, Account Owner, individual user (if API Key has an owner) |
| Sign up credit expiring in 30 days | Account Owner, Finance Administrator |
| Sign up credit expiring in 14 days | Account Owner, Finance Administrator |
| Sign up credit expiring in 7 days | Account Owner, Finance Administrator |
| Sign up credit expiring in 1 day | Account Owner, Finance Administrator |
| Sign up credit is 50% consumed | Account Owner, Finance Administrator |
| Sign up credit is 90% consumed | Account Owner, Finance Administrator |
| Account plan type changed | Global Administrator, Account Owner, Finance Administrator |
| Namespace Failover Completed/Failed | Global Administrator, Namespace Administrator, Account Owner |

To ensure that you receive email notifications, configure your junk-email filters to permit email from `noreply@temporal.io`.

To provide feedback on notifications or request changes, [create a support ticket](/cloud/support#support-ticket).

---

## Cloud Ops API

:::tip Support, stability, and dependency info
The Temporal Cloud Operations API is in [Public Preview](/evaluate/development-production-features/release-stages#public-preview).
:::

The Temporal Cloud Operations API, or the Cloud Ops API, is an open source, public [HTTP API](https://saas-api.tmprl.cloud/docs/httpapi.html#description/introduction) and [gRPC API](https://github.com/temporalio/cloud-api/tree/main) for programmatically managing Temporal Cloud control plane resources, including [Namespaces](/cloud/namespaces), [Users](/cloud/users), [Service Accounts](/cloud/service-accounts), [API keys](/cloud/api-keys), and others. The Temporal Cloud [Terraform Provider](/cloud/terraform-provider), [tcld CLI](/cloud/tcld), and Web UI all use the Cloud Ops API.

## Develop applications with the Cloud Ops API

You can use the HTTP API or the gRPC API depending on how you need to integrate with your platform. The URL to access both the HTTP and gRPC Cloud Ops API is `saas-api.tmprl.cloud`.

### Prerequisites

These prerequisites are required for using either HTTP or gRPC.

- [Temporal Cloud user account](/cloud/get-started)
- [API Key](/cloud/tcld/apikey#create) for authentication

### Use cases

Some common reasons you might use the API are to:

- Provision Namespaces per environment or tenant via pipelines.
- Bootstrap new projects by creating users, assigning roles, and creating Namespaces via custom scripts.
- Rotate service account keys on a schedule with a job.
- Audit and report access across orgs with scheduled HTTP requests.

### Using HTTP

[The HTTP API](https://saas-api.tmprl.cloud/docs/httpapi.html#description/introduction) supports the same operations as the [gRPC API](#using-grpc), but it's usable via standard HTTP methods and authentication. This may be a more convenient option if you are writing automation scripts for CI/CD or you can't use gRPC due to network policies, proxies, tooling gaps, or language/runtime constraints. Since it's standard HTTP, it's language agnostic, giving you the ability to run cloud operations consistently.

:::note
This *does not* allow interaction with individual Workflows or Activities via HTTP.
:::

### Using gRPC

*For Go developers:*

- Use the [Go SDK](https://github.com/temporalio/cloud-sdk-go) for the simplest setup experience

*For other programming languages:*

- Basic familiarity with gRPC and Protocol Buffers (protobuf)
- [Protocol Buffers](https://github.com/protocolbuffers/protobuf/releases)
- [gRPC](https://grpc.io/docs/languages/) in your preferred programming language

You can use the provided proto files to generate client libraries in your desired programming language, and then use that client to access the gRPC API. You can also find the [full gRPC docs on Buf](https://buf.build/temporalio/cloud-api/docs/main:temporal.api.cloud.cloudservice.v1#temporal.api.cloud.cloudservice.v1.CloudService).

#### Using the Go SDK

If you're developing in Go, we recommend using the [Go SDK](https://github.com/temporalio/cloud-sdk-go), which provides pre-compiled Go bindings and a more idiomatic interface. The Go SDK handles all the protobuf compilation and provides ready-to-use Go types and client interfaces. You can also use the [Go samples](https://github.com/temporalio/cloud-samples-go) to help you get started with the Cloud Ops API using the Go SDK.

To start using the Go SDK with the Cloud Ops API, follow these steps:

1. Install the Go SDK:

   ```command
   go get github.com/temporalio/cloud-sdk-go
   ```

2. Import the SDK's client package:

   ```go
   import (
       "github.com/temporalio/cloud-sdk-go/client"
   )
   ```

3. The Go SDK provides pre-built client interfaces that handle authentication and connection setup. Refer to the [Go samples](https://github.com/temporalio/cloud-samples-go) for detailed usage examples. The Go SDK eliminates the need to work directly with generated protobuf files and provides a more idiomatic Go experience.
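For orientation, here is a minimal sketch of the pattern demonstrated in the [Go samples](https://github.com/temporalio/cloud-samples-go), using the Cloud Operations client built into the main Temporal Go SDK (`go.temporal.io/sdk`). Treat the exact option and method names as assumptions and check the samples for the authoritative setup:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	cloudservicev1 "go.temporal.io/api/cloud/cloudservice/v1"
	"go.temporal.io/sdk/client"
)

func main() {
	ctx := context.Background()

	// Read the API key from the environment; never hard-code it.
	apiKey := os.Getenv("TEMPORAL_CLOUD_API_KEY")

	c, err := client.DialCloudOperationsClient(ctx, client.CloudOperationsClientOptions{
		Credentials: client.NewAPIKeyStaticCredentials(apiKey),
	})
	if err != nil {
		log.Fatalln("failed to dial the Cloud Ops API", err)
	}
	defer c.Close()

	// List the Namespaces in the account.
	resp, err := c.CloudService().GetNamespaces(ctx, &cloudservicev1.GetNamespacesRequest{})
	if err != nil {
		log.Fatalln("GetNamespaces failed", err)
	}
	for _, ns := range resp.GetNamespaces() {
		fmt.Println(ns.GetNamespace())
	}
}
```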
#### Compile the API and use the generated code (For other languages)

For programming languages other than Go, download the gRPC protobufs from the [Cloud Ops API repository](https://github.com/temporalio/cloud-api/tree/main/temporal/api/cloud) and compile them manually. Use [gRPC](https://grpc.io/docs/) to compile and generate code in your preferred [programming language](https://grpc.io/docs/#official-support). The steps below use Python as an example and require [Python's gRPC tools](https://grpc.io/docs/languages/python/quickstart/#grpc-tools) to be installed, but the approach can be adapted for other supported programming languages.

1. Clone the Temporal Cloud API repository:

   ```command
   git clone https://github.com/temporalio/cloud-api.git
   cd cloud-api
   ```

2. Copy the Protobuf files:
   - Navigate to the `temporal` directory.
   - Copy the protobuf files to your project directory.

3. Compile the Protobuf files:

   ```command
   python -m grpc_tools.protoc -I./ --python_out=./ --grpc_python_out=./ *.proto
   ```

   - `-I` specifies the directory of the `.proto` files.
   - `--python_out=` sets the output directory for generated Python classes.
   - `--grpc_python_out=` sets the output directory for generated gRPC service classes.
   - `*.proto` processes all `.proto` files.

   After compiling the Protobuf files, you will have generated code files in your project directory. These files enable interaction with the Temporal Cloud API in your chosen programming language.

4. Import the generated files:
   - Locate the Python files (.py) generated in your project directory.
   - Import these files into your Python application where you intend to interact with the Temporal Cloud API.

5. Use the API:
   - Use the classes and methods defined in the imported files to communicate with the Temporal Cloud services.
   - Be sure to handle any required authentication or configuration as needed for Temporal Cloud.

This approach can be adapted for other programming languages by following their respective import and usage conventions for the generated code files.

## Usage guidelines

When interacting with the Temporal Cloud Ops API, follow these guidelines:

- API version header:
  - Always include the `temporal-cloud-api-version` header in your requests, specifying the API version identifier.
  - The current API version can be found [here](https://github.com/temporalio/cloud-api/blob/main/VERSION#L1C1-L1C14).
- Connection URL:
  - Connect to Temporal Cloud using the gRPC URL: `saas-api.tmprl.cloud:443`.
- Engagement steps:
  - Generate an API key:
    - Obtain an [API Key for authentication](/cloud/api-keys#manage-api-keys). Note that many operations may require Admin privileges.
  - Set up a client:
    - Establish a secure connection to Temporal Cloud. Refer to the example [Client setup in Go](https://github.com/temporalio/cloud-samples-go/blob/main/client/temporal/client.go) for guidance.
  - Execute operations:
    - For operation specifics, refer to `cloudservice/v1/request_response.proto` for gRPC messages and `cloudservice/v1/service.proto` for gRPC services.

These steps provide a structured approach to using the Temporal Cloud Ops API effectively, ensuring proper authentication and connection setup.
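As an illustrative smoke test of these guidelines (not an official recipe), you can call the gRPC endpoint directly with `grpcurl`, supplying the API key and version header described above; the method name comes from the CloudService definition linked earlier:

```command
# <current-version> is the value from the VERSION file linked above.
grpcurl \
  -H "Authorization: Bearer $TEMPORAL_CLOUD_API_KEY" \
  -H "temporal-cloud-api-version: <current-version>" \
  saas-api.tmprl.cloud:443 \
  temporal.api.cloud.cloudservice.v1.CloudService/GetNamespaces
# If the endpoint does not expose gRPC reflection, pass the compiled protos
# with -import-path and -proto instead.
```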
Rate limits are applied based on identity type, with different limits for users and service accounts. ### Account-level rate limit **Total rate limit: 160 requests per second (RPS)** This limit applies to all requests made to the Temporal Cloud control plane by any client (tcld, UI, Cloud Ops API) or identity type (user, service account) within your account. The total account throughput cannot exceed the limit regardless of the number of users or service accounts making requests. ### Per-identity rate limits **User rate limit: 40 RPS per user** This limit applies to all requests made by each user through any client (tcld, UI, Cloud Ops API), regardless of the authentication method used (SSO or API keys). **Service account rate limit: 80 RPS per service account** This limit applies to all requests made by each service account through any client (tcld, Cloud Ops API). ### Important considerations - Rate limits are enforced across all Temporal Cloud control plane operations - Multiple clients used by the same identity (user or service account) share the same rate limit - Authentication method (SSO, API keys) does not affect rate limiting - These limits help ensure system stability and prevent any single account or identity from overwhelming the service ### Request limit increases If your use case requires higher rate limits, you can request an increase by [submitting a support ticket](/cloud/support#support-ticket). When requesting a limit increase, please provide: - Your current usage patterns and requirements - The specific limits you need increased - A description of your use case and why higher limits are necessary ### Provide feedback Your input is valuable! You can provide feedback through the following channels: - Submit request or feedback through a [support ticket](/cloud/support#support-ticket) - Open an issue in the [GitHub Repo](https://github.com/temporalio/cloud-api) --- ## Awsregions ### Asia Pacific - Tokyo (`ap-northeast-1`) - **Cloud API Code**: `aws-ap-northeast-1` - **Regional Endpoint**: `aws-ap-northeast-1.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.ap-northeast-1.vpce-svc-08f34c33f9fb8a48a` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-ap-northeast-2` - `aws-ap-south-1` - `aws-ap-south-2` - `aws-ap-southeast-1` - `aws-ap-southeast-2` - **Multi-Cloud Replication**: - `gcp-asia-south1` ### Asia Pacific - Seoul (`ap-northeast-2`) - **Cloud API Code**: `aws-ap-northeast-2` - **Regional Endpoint**: `aws-ap-northeast-2.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.ap-northeast-2.vpce-svc-08c4d5445a5aad308` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-ap-northeast-1` - `aws-ap-south-1` - `aws-ap-south-2` - `aws-ap-southeast-1` - `aws-ap-southeast-2` - **Multi-Cloud Replication**: - `gcp-asia-south1` ### Asia Pacific - Mumbai (`ap-south-1`) - **Cloud API Code**: `aws-ap-south-1` - **Regional Endpoint**: `aws-ap-south-1.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.ap-south-1.vpce-svc-0ad4f8ed56db15662` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-ap-northeast-1` - `aws-ap-northeast-2` - `aws-ap-south-2` - `aws-ap-southeast-1` - `aws-ap-southeast-2` - **Multi-Cloud Replication**: - `gcp-asia-south1` ### Asia Pacific - Hyderabad (`ap-south-2`) - **Cloud API Code**: `aws-ap-south-2` - **Regional Endpoint**: `aws-ap-south-2.region.tmprl.cloud` - **PrivateLink Endpoint 
Service**: `com.amazonaws.vpce.ap-south-2.vpce-svc-08bcf602b646c69c1` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-ap-northeast-1` - `aws-ap-northeast-2` - `aws-ap-south-1` - `aws-ap-southeast-1` - `aws-ap-southeast-2` - **Multi-Cloud Replication**: - `gcp-asia-south1` ### Asia Pacific - Singapore (`ap-southeast-1`) - **Cloud API Code**: `aws-ap-southeast-1` - **Regional Endpoint**: `aws-ap-southeast-1.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.ap-southeast-1.vpce-svc-05c24096fa89b0ccd` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-ap-northeast-1` - `aws-ap-northeast-2` - `aws-ap-south-1` - `aws-ap-south-2` - `aws-ap-southeast-2` - **Multi-Cloud Replication**: - `gcp-asia-south1` ### Asia Pacific - Sydney (`ap-southeast-2`) - **Cloud API Code**: `aws-ap-southeast-2` - **Regional Endpoint**: `aws-ap-southeast-2.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.ap-southeast-2.vpce-svc-0634f9628e3c15b08` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-ap-northeast-1` - `aws-ap-northeast-2` - `aws-ap-south-1` - `aws-ap-south-2` - `aws-ap-southeast-1` - **Multi-Cloud Replication**: - `gcp-asia-south1` ### Europe - Frankfurt (`eu-central-1`) - **Cloud API Code**: `aws-eu-central-1` - **Regional Endpoint**: `aws-eu-central-1.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.eu-central-1.vpce-svc-073a419b36663a0f3` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-eu-west-1` - `aws-eu-west-2` - **Multi-Cloud Replication**: - `gcp-europe-west3` ### Europe - Ireland (`eu-west-1`) - **Cloud API Code**: `aws-eu-west-1` - **Regional Endpoint**: `aws-eu-west-1.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.eu-west-1.vpce-svc-04388e89f3479b739` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-eu-central-1` - `aws-eu-west-2` - **Multi-Cloud Replication**: - `gcp-europe-west3` ### Europe - London (`eu-west-2`) - **Cloud API Code**: `aws-eu-west-2` - **Regional Endpoint**: `aws-eu-west-2.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.eu-west-2.vpce-svc-0ac7f9f07e7fb5695` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-eu-central-1` - `aws-eu-west-1` - **Multi-Cloud Replication**: - `gcp-europe-west3` ### North America - Central Canada (`ca-central-1`) - **Cloud API Code**: `aws-ca-central-1` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.ca-central-1.vpce-svc-080a781925d0b1d9d` - **Regional Endpoint**: `aws-ca-central-1.region.tmprl.cloud` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-us-east-1` - `aws-us-east-2` - `aws-us-west-2` - **Multi-Cloud Replication**: - `gcp-us-central1` - `gcp-us-west1` - `gcp-us-east4` ### North America - Northern Virginia (`us-east-1`) - **Cloud API Code**: `aws-us-east-1` - **Regional Endpoint**: `aws-us-east-1.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.us-east-1.vpce-svc-0822256b6575ea37f` - **Same Region Replication**: Available - **Multi-Region Replication**: - `aws-ca-central-1` - `aws-us-east-2` - `aws-us-west-2` - **Multi-Cloud Replication**: - `gcp-us-central1` - `gcp-us-west1` - `gcp-us-east4` ### North America - Ohio (`us-east-2`) - **Cloud API Code**: `aws-us-east-2` - **Regional Endpoint**: `aws-us-east-2.region.tmprl.cloud` - 
**PrivateLink Endpoint Service**: `com.amazonaws.vpce.us-east-2.vpce-svc-01b8dccfc6660d9d4` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `aws-ca-central-1` - `aws-us-east-1` - `aws-us-west-2` - **Multi-Cloud Replication**: - `gcp-us-central1` - `gcp-us-west1` - `gcp-us-east4` ### North America - Oregon (`us-west-2`) - **Cloud API Code**: `aws-us-west-2` - **Regional Endpoint**: `aws-us-west-2.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.us-west-2.vpce-svc-0f44b3d7302816b94` - **Same Region Replication**: Available - **Multi-Region Replication**: - `aws-ca-central-1` - `aws-us-east-1` - `aws-us-east-2` - **Multi-Cloud Replication**: - `gcp-us-central1` - `gcp-us-west1` - `gcp-us-east4` ### South America - São Paulo (`sa-east-1`) - **Cloud API Code**: `aws-sa-east-1` - **Regional Endpoint**: `aws-sa-east-1.region.tmprl.cloud` - **PrivateLink Endpoint Service**: `com.amazonaws.vpce.sa-east-1.vpce-svc-0ca67a102f3ce525a` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - None - **Multi-Cloud Replication**: - None --- ## GCP Regions ### North America - Iowa (`us-central1`) - **Cloud API Code**: `gcp-us-central1` - **Regional Endpoint**: `gcp-us-central1.region.tmprl.cloud` - **Private Service Connect Service Attachment URI**: `projects/prod-d9ch6v2ybver8d2a8fyf7qru9/regions/us-central1/serviceAttachments/pl-5xzng` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `gcp-us-west1` - `gcp-us-east4` - **Multi-Cloud Replication**: - `aws-ca-central-1` - `aws-us-east-1` - `aws-us-east-2` - `aws-us-west-2` ### North America - Oregon (`us-west1`) - **Cloud API Code**: `gcp-us-west1` - **Regional Endpoint**: `gcp-us-west1.region.tmprl.cloud` - **Private Service Connect Service Attachment URI**: `projects/prod-rbe76zxxzydz4cbdz2xt5b59q/regions/us-west1/serviceAttachments/pl-94w0x` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `gcp-us-central1` - `gcp-us-east4` - **Multi-Cloud Replication**: - `aws-ca-central-1` - `aws-us-east-1` - `aws-us-east-2` - `aws-us-west-2` ### North America - Northern Virginia (`us-east4`) - **Cloud API Code**: `gcp-us-east4` - **Regional Endpoint**: `gcp-us-east4.region.tmprl.cloud` - **Private Service Connect Service Attachment URI**: `projects/prod-y399cvr9c2b43es2w3q3e4gvw/regions/us-east4/serviceAttachments/pl-8awsy` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - `gcp-us-central1` - `gcp-us-west1` - **Multi-Cloud Replication**: - `aws-ca-central-1` - `aws-us-east-1` - `aws-us-east-2` - `aws-us-west-2` ### Europe - Frankfurt (`europe-west3`) - **Cloud API Code**: `gcp-europe-west3` - **Regional Endpoint**: `gcp-europe-west3.region.tmprl.cloud` - **Private Service Connect Service Attachment URI**: `projects/prod-kwy7d4faxp6qgrgd9x94du36g/regions/europe-west3/serviceAttachments/pl-acgsh` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - None - **Multi-Cloud Replication**: - `aws-eu-central-1` - `aws-eu-west-1` - `aws-eu-west-2` ### Asia Pacific - Mumbai (`asia-south1`) - **Cloud API Code**: `gcp-asia-south1` - **Regional Endpoint**: `gcp-asia-south1.region.tmprl.cloud` - **Private Service Connect Service Attachment URI**: `projects/prod-d5spc2sfeshws33bg33vwdef7/regions/asia-south1/serviceAttachments/pl-7w7tw` - **Same Region Replication**: Not Available - **Multi-Region Replication**: - None - **Multi-Cloud Replication**: - `aws-ap-northeast-1` - `aws-ap-northeast-2`
- `aws-ap-south-1` - `aws-ap-south-2` - `aws-ap-southeast-1` - `aws-ap-southeast-2` --- ## Private Service Connect | Region | Private Service Connect Service Name | | ------------- | ------------------------------------------------------------------------------------------ | | `asia-south1` | `projects/prod-d5spc2sfeshws33bg33vwdef7/regions/asia-south1/serviceAttachments/pl-7w7tw` | | `us-central1` | `projects/prod-d9ch6v2ybver8d2a8fyf7qru9/regions/us-central1/serviceAttachments/pl-5xzng` | | `us-west1` | `projects/prod-rbe76zxxzydz4cbdz2xt5b59q/regions/us-west1/serviceAttachments/pl-94w0x` | --- ## RPO and RTO When a cloud outage disrupts a Namespace, Temporal Cloud takes measures to maintain the Namespace's availability and data durability. The time it takes to recover from the outage is called the "recovery time." The amount of data (event histories) lost is called the "recovery point." A durable system should have a low recovery time and recovery point. To help users plan for keeping critical Workflows available during a cloud outage, Temporal Cloud publishes goals for the recovery time and recovery point for each kind of outage. These goals are called the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). These objectives are complementary to Temporal Cloud's [Service Level Agreement (SLA)](/cloud/sla). To achieve the lowest RPO and RTO, Temporal Cloud offers [High Availability](/cloud/high-availability) features that keep Workflows operational with minimal downtime. When High Availability is enabled on a Namespace, the user chooses a region to place a "replica" that will take over in the event of a failure. The location of the replica determines the type of replication used and the type of outages that can be handled. Multi-region Replication is when the active and replica are in different regions on the same cloud (e.g., AWS us-east-1 and AWS us-west-2). Multi-cloud Replication is when the active and replica are in different clouds (e.g., AWS and GCP). Same-region Replication is when the active and replica are in the same region. Temporal always places the active and replica in different [cells](/cloud/overview#cell-based-infrastructure). As Workflows progress in the active region, history events are asynchronously replicated to the replica. Because replication is asynchronous, High Availability does not impact the latency or throughput of Workflow Executions in the active region. If an outage hits the active region or cell, Temporal Cloud will fail over to the replica so that existing Workflow Executions will continue to run and new Workflow Executions can be started. The Recovery Point Objective and Recovery Time Objective for Temporal Cloud depend on the type of outage and which [High Availability](/cloud/high-availability) feature your Namespace has enabled. Temporal Cloud can only set an RPO and RTO for cases where it has the ability to mitigate the outage. Therefore, the below RPOs and RTOs apply to Namespaces that have the corresponding type of replication and have enabled Temporal-initiated failovers, which come enabled by default. 1. **Availability zone outage**: 1. _Applicable Namespaces:_ All Namespaces 2. _Goals:_ Zero RPO and near-zero RTO 3. _More details:_ Historically, these have been the most common type of outage in the cloud. Temporal Cloud replicates every Namespace across three availability zones.
The failure of a single availability zone is handled automatically by Temporal Cloud behind the scenes, with no potential for data loss and little-to-no observable downtime to the end user. 2. **Cell outage**: 1. _Applicable Namespaces:_ Namespaces with Same-region Replication, Multi-region Replication, or Multi-cloud Replication 2. _Goals:_ 1-minute RPO and 20-minute RTO 3. _More details:_ Temporal Cloud runs on a [cell architecture](/cloud/overview#cell-based-infrastructure). Each cell contains the software and services necessary to host a Namespace. While unlikely, it's possible for a cell to experience a disruption due to uncaught software bugs or sub-component failures (e.g., an outage in the underlying database). 3. **Regional outage**: 1. _Applicable Namespaces:_ Namespaces with Multi-region Replication or Multi-cloud Replication 2. _Goals:_ 1-minute RPO and 20-minute RTO 3. _More details:_ On [rare occasions](https://temporal.io/blog/how-devs-kept-running-during-the-aws-us-east-1-oct-20-2025), an entire region within a cloud provider will be degraded. Since Namespaces depend on the cloud provider's infrastructure, Temporal Cloud is not immune to these outages. 4. **Cloud-wide outage**: 1. _Applicable Namespaces:_ Namespaces with Multi-cloud Replication 2. _Goals:_ 1-minute RPO and 20-minute RTO 3. _More details:_ An entire cloud provider has an outage across most or all regions. Since cloud providers strive to keep cloud regions de-coupled, these are the rarest outages of all. Still, they [have happened](https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW) in the past. Notes: - The above goals are only applicable to Namespaces that have enabled Temporal-initiated failovers, which come enabled by default. Temporal-initiated failovers are initiated by Temporal's tooling and/or on-call engineers without user action. Users can always initiate a failover on their Namespace, even when Temporal-initiated failovers are enabled. In an outage, a user-initiated failover will not cancel out or accidentally reverse a Temporal-initiated failover. :::note Temporal highly recommends keeping Temporal-initiated failovers enabled. When Temporal-initiated failovers are _disabled,_ Temporal Cloud cannot set an RPO and RTO for that Namespace, because it cannot control when or if the user will trigger a failover. ::: - The above goals are for unplanned cloud outages. They do not apply to user-initiated failovers during healthy periods (e.g., for DR drills). Read about [triggering a failover](/cloud/high-availability/failovers) to see how a Namespace failover should perform during healthy periods. - As soon as a cloud outage resolves, Temporal's on-call engineers will work to restore service to Namespaces that were not protected by High Availability. A cloud outage can leave lingering effects in Temporal's systems and applications, even after the cloud provider restores the underlying service. Because of this, affected Namespaces may not be immediately available when the underlying service is restored. An affected Namespace's outage may last longer than the cloud provider's outage. - All Namespaces are backed up every 4 hours. If an outage causes data loss on a Namespace that was not protected by High Availability, then Temporal will use the backup to restore as much data as feasible.
## Minimizing the Recovery Point Temporal has put extensive work into tools and processes that minimize the recovery point and achieve its RPO for Temporal-initiated failovers, including: - Best-in-class [data replication technology](https://youtu.be/mULBvv83dYM?si=RDeWb3gVsEtgGM4z&t=334) that keeps the replica up to date with the active. - Monitoring, alerting, and internal SLOs on the replication lag for every Temporal Cloud Namespace. However, user actions on a Namespace can affect the recovery point. For example, suddenly spiking into much higher throughput than a Namespace has seen before could create a period of replication lag where the replica falls behind the active. Temporal provides a [replication lag](/cloud/high-availability/monitoring#replication-lag-metric) metric for each Namespace. This metric approximates the recovery point the Namespace would achieve in a worst-case failure at that given moment. :::note Temporal recommends monitoring the replication lag and alerting should it rise too high, e.g., above 1 minute. ::: ## Minimizing the Recovery Time Temporal has put extensive work into tools and processes that minimize the recovery time and achieve its RTO for Temporal-initiated failovers, including: - History events are replicated _asynchronously_. This ensures that the Namespace can still run Workflows in the active region even if there are networking blips or outages with the replica region. - Outages are detected automatically. We have extensive internal alerting to detect disruptions to Namespaces, and are ever improving this system. - Battle-tested Temporal Workflows that execute failovers of all Temporal Cloud Namespaces in a given region quickly. - Regular drills where we fail over our internal Namespaces to test our tooling. - Expert engineers on-call 24/7 monitoring Temporal Cloud Namespaces and ready to assist should an outage occur. To achieve the lowest possible recovery times, Temporal recommends that you: - Keep Temporal-initiated failovers enabled on your Namespace (the default) - Invest in a process to detect outages and trigger a manual failover. Users can trigger manual failovers on their Namespaces even if Temporal-initiated failovers are enabled. There are several benefits to combining a manual failover process with Temporal-initiated failovers: - You can detect outages that Temporal doesn't. In the cloud, regional outages don't affect all services equally. It's possible that Temporal, and the services it depends on, are unaffected by the outage, even while your Workers or other cloud infrastructure are disrupted. If you [monitor services in your critical path](https://sre.google/sre-book/monitoring-distributed-systems/) and alert on unusual error rates, you may catch outages before Temporal Cloud does. - You can sequence your failovers in a particular order. Your cloud infrastructure probably contains more pieces than just your Temporal Namespace: Temporal Workers, compute pools, data stores, and other cloud services. If you manually fail over, you can choose the order in which these pieces switch to the replica region. You can then test that ordering with failover drills and ensure it executes smoothly without data consistency issues or bottlenecks. - You can proactively fail over more aggressively than Temporal. While the 20-minute RTO should be sufficient for most use cases, some may strive to hit an even lower RTO.
For workloads like high frequency trading, auctions, or popular sporting events, an outage at the wrong time could cause tremendous lost revenue per minute. You can adopt a posture that fails over more eagerly than Temporal does. For example, you could trigger a manual failover at the first sign of a possible disruption, before knowing whether there's a true regional outage. - Even if you have robust tooling to detect an outage and trigger a failover, leaving Temporal-initiated failovers enabled provides a "safety net" in case your automation misses an outage. It also gives Temporal leeway to preemptively fail over your Namespace if we detect that it may be disrupted soon, e.g., by a rolling failure that has impacted other Namespaces but not yours, yet. ## Understanding Temporal's RTO vs. SLA Temporal has both a Recovery Time Objective (RTO) and a Service Level Agreement (SLA). They serve complementary purposes and apply in different situations. | Aspect | RTO | SLA | |-----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | What is it? | An objective, or high-priority goal, for the total time that an outage disrupts a Namespace. | A contractual agreement that sets an upper bound on the service error rate, with financial repercussions. | | How is it measured? | The achieved recovery time is measured in terms of minutes per outage. | The achieved service error rate is measured in terms of error rate per month. | | How is the calculation performed? | The achieved recovery time in a given outage is the total time between when a disruption to a Namespace began and when the Namespace was restored to full functionality, either after a failover to a healthy region or after the outage has been mitigated. | Temporal measures the percentage of requests to Temporal Cloud that fail, and applies a [formula](/cloud/sla) to get the final percentage for the month. | | Do partial degradations count? | Most outages contain periods of __partial degradation__ where some percentage of Namespace operations fail while the rest complete as normal. When they disrupt a Namespace, periods of partial degradation count in the calculation of the recovery time. | Partial degradations only partially count for the service error rate calculation. A 5-minute window with a 10% error rate would count less than a 5-minute window with a 100% error rate. | | What is excluded? | For partial degradations, what counts as a disruption to a Namespace is subject to Temporal's expert judgment, but a good rule of thumb is a service error rate >=10%. | We exclude outages that are out of Temporal's control to mitigate, e.g., a failure of the underlying cloud provider infrastructure that affects a Namespace without High Availability and Temporal-initiated failovers enabled. 
If a Namespace has the relevant High Availability feature and has Temporal-initiated failovers enabled, then Temporal can act to mitigate the outage, and such an outage usually does count against the SLA. Full exclusions are on the [SLA page](/cloud/sla). | The following examples illustrate the RTO and SLA calculations for different types of Namespaces in a regional outage. These hypothetical Namespaces are based on actual Temporal Cloud performance in a [real-world outage](https://temporal.io/blog/how-devs-kept-running-during-the-aws-us-east-1-oct-20-2025). Suppose that region `middle-earth-1` experienced a cascading failure starting at 10:00:00 UTC, causing various instances and machines to fail over time. Temporal's automatic failover triggered for all Namespaces and completed at 10:15:00 UTC. - Namespace 0 was in the region but its cell was not affected by the outage. The only downtime it had was for a few seconds during the failover operation. It experienced a near-zero Recovery Time, and its service error rate was negligible. Graceful failover was successful, and this Namespace achieved a recovery point of 0. - Namespace 1_A was in the region and its cell experienced a partial degradation that caused 10% of requests to fail in the first five minutes, 25% in the second five minutes, and 50% in the third five minutes. Since it was significantly impacted from 10:00:00 to 10:15:00, its Recovery Time was 15 minutes. If it had no other service errors that month, then its service error rate for the month would be: ( (1 - 10%) + (1 - 25%) + (1 - 50%) + 8925 * 100% ) / 8928 = 99.990%. (Note: there are 8928 5-minute periods in a 31-day month.) Graceful failover was successful, and this Namespace achieved a recovery point of 0. - Namespace 1_B was in the same cell as Namespace 1_A, so it also experienced a partial degradation that caused 10% of requests to fail. However, its owner detected the outage via their own tooling and decided to manually fail over at 10:05:00. This Namespace achieved a recovery time of 5 minutes and a service error rate of ( (1 - 10%) + 8927 * 100% ) / 8928 = 99.999%. Graceful failover was successful, and this Namespace achieved a recovery point of 0. - Namespace 2_A was in the region and its cell was fully network partitioned at the start of the outage, causing 100% of requests to fail. Since it was significantly impacted from 10:00:00 to 10:15:00, its Recovery Time was 15 minutes. If it had no other service errors that month, then its service error rate for the month would be: ( 3 * (1 - 100%) + 8925 * 100% ) / 8928 = 99.966%. Because the Namespace was network partitioned, graceful failover did not succeed, and forced failover was used. The recovery point achieved was equal to the replication lag at the time of the network partition, which was a few seconds. - Namespace 2_B was in the region and was fully network partitioned, causing 100% of requests to fail. However, its owner detected the outage via their own tooling and decided to manually fail over at 10:05:00. This Namespace achieved a recovery time of 5 minutes and a service error rate of ( 1 * (1 - 100%) + 8927 * 100% ) / 8928 = 99.989%. Because the Namespace was network partitioned, graceful failover did not succeed, and forced failover was used. The recovery point achieved was equal to the replication lag at the time of the network partition, which was a few seconds. All of the above Namespaces were in the affected region and beat the 1-minute RPO.
But they achieved varying recovery times and service error rates. - Notice how Namespace 1_A and Namespace 2_A were both automatically failed over with **the same recovery time but different service error rates**. Notice how Namespace 2_B and Namespace 1_A happen to have **essentially the same service error rate but different recovery times**. This illustrates how RTO and SLA can differ, even in the same outage. Both are valuable tools for Temporal Cloud users to measure the availability of their Namespaces. - Notice how the Namespaces that were manually failed over (Namespace 1_B and Namespace 2_B) achieved lower recovery times than the Namespaces that were automatically failed over (Namespace 1_A and Namespace 2_A). This illustrates how **proactive, aggressive manual failover can achieve a better recovery time than automatic failover**. --- ## SAML authentication To authenticate the users of your Temporal Cloud account, you can connect an identity provider (IdP) to your account by using Security Assertion Markup Language (SAML) 2.0. :::info SAML is a paid feature. See the [pricing page](/cloud/pricing) for details. ::: ## Integrate SAML with your Temporal Cloud account 1. Locate your [Temporal Cloud Account Id](/cloud/namespaces#temporal-cloud-account-id). Your Account Id can be viewed and copied from the Temporal Cloud user profile dropdown menu in the top right corner. Alternatively, find your [Namespace Id](/cloud/namespaces#temporal-cloud-namespace-id). The Account Id is the five or six characters following the period (.), such as `f45a2`. You will need the Account Id to construct your callback URL and your entity identifier. 1. Configure SAML with your IdP by following one of these sets of instructions: - [Microsoft Entra ID](#configure-saml-with-azure-ad) - [Okta](#configure-saml-with-okta) 1. [Share your connection information with us and test your connection.](#finish-saml-configuration) ## How to configure SAML with Microsoft Entra ID {#configure-saml-with-azure-ad} If you want to use the general Microsoft login mechanism, you don't need to set up SAML with Entra ID. Just select **Continue with Microsoft** on the Temporal Cloud sign-in page. To use Entra ID as your SAML IdP, create a Microsoft Entra ID Enterprise application. 1. Sign in to the [Azure portal](https://portal.azure.com/). 1. On the home page, under **Manage Microsoft Entra ID**, select **View**. 1. On the **Overview** page near the top, select **Add > Enterprise application**. 1. On the **Browse Microsoft Entra ID Gallery** page near the top, select **Create your own application**. 1. In the **Create your own application** pane, provide a name for your application (such as `temporal-cloud`) and select **Integrate any other application you don't find in the gallery**. 1. Select **Save**. 1. In the **Getting Started** section, select **2. Set up single sign on**. 1. On the **Single sign-on** page, select **SAML**. 1. In the **Basic SAML Configuration** section of the **SAML-based Sign-on** page, select **Edit**. 1. In **Identifier (Entity ID)**, enter the following entity identifier, including your Account Id where indicated: ```bash urn:auth0:prod-tmprl:ACCOUNT_ID-saml ``` A correctly formed entity identifier looks like this: ```bash urn:auth0:prod-tmprl:f45a2-saml ``` 1.
In **Reply URL (Assertion Consumer Service URL)**, enter the following callback URL, including your Account Id where indicated: ```bash https://login.tmprl.cloud/login/callback?connection=ACCOUNT_ID-saml ``` A correctly formed callback URL looks like this: ```bash https://login.tmprl.cloud/login/callback?connection=f45a2-saml ``` 1. In **Sign on URL**, enter the following login URL, including your Account Id where indicated: ```bash https://cloud.temporal.io/login/saml?connection=ACCOUNT_ID-saml ``` A correctly formed login URL looks like this: ```bash https://cloud.temporal.io/login/saml?connection=f45a2-saml ``` 1. You can leave the other fields blank. Near the top of the pane, select **Save**. 1. In the **Attributes & Claims** section, select **Edit**. Configure the following settings. Under **Required claim**: - Set **Unique User Identifier (NameID)** to `user.userprincipalname` - Set the **NameID format** to `emailAddress` These are the default settings for Microsoft Entra ID. Then under **Additional claims**, ensure **Email** and **Name** are present. 1. Collect information that you need to send to us: - In the **SAML Certificates** section of the **SAML-based Sign-on** page, select the download link for **Certificate (Base64)**. - In the **Set up _APPLICATION_NAME_** section of the **SAML-based Sign-on** page, copy the value of **Login URL**. To finish setting up Microsoft Entra ID as your SAML IdP, see [Finish SAML configuration](#finish-saml-configuration). ## How to configure SAML with Okta {#configure-saml-with-okta} To use Okta as your SAML IdP, configure a new Okta application integration. 1. Sign in to the [Okta Admin Console](https://www.okta.com/login/). 1. In the left navigation pane, select **Applications > Applications**. 1. On the **Applications** page, select **Create App Integration**. 1. In the **Create a new app integration** dialog, select **SAML 2.0** and then select **Next**. 1. On the **Create SAML Integration** page in the **General Settings** section, provide a name for your application (such as `temporal-cloud`) and then select **Next**. 1. In the **Configure SAML** section in **Single sign on URL**, enter the following callback URL, including your Account Id where indicated: ```bash https://login.tmprl.cloud/login/callback?connection=ACCOUNT_ID-saml ``` A correctly formed callback URL looks like this: ```bash https://login.tmprl.cloud/login/callback?connection=f45a2-saml ``` 1. In **Audience URI (SP Entity ID)**, enter the following entity identifier, including your Account Id where indicated: ```bash urn:auth0:prod-tmprl:ACCOUNT_ID-saml ``` A correctly formed entity identifier looks like this: ```bash urn:auth0:prod-tmprl:f45a2-saml ``` 1. We require the user's full email address when connecting to Temporal. - In **Name ID format**, select `EmailAddress`. - In **Attribute Statements**, set **email** and **name**. 1. Select **Next**. 1. In the **Feedback** section, select **Finish**. 1. On the **Applications** page, select the name of the application integration you just created. 1. On the application integration page, select the **Sign On** tab. 1. Under **SAML Setup**, select **View SAML setup instructions**. 1. Collect information that you need to send to us: - Copy the IdP settings. - Download the active certificate. To finish setting up Okta as your SAML IdP, see the next section, [Finish SAML configuration](#finish-saml-configuration).
## How to finish your SAML configuration {#finish-saml-configuration} After you configure SAML with your IdP, we can finish the configuration on our side. [Create a support ticket](/cloud/support#support-ticket) that includes the following information: - The sign-in URL from your application - The X.509 SAML sign-in certificate in PEM format - One or more IdP domains to map to the SAML connection Generally, the provided IdP domain is the same as the domain for your email address. You can provide multiple IdP domains. When you receive confirmation from us that we have finished configuration, log in to Temporal Cloud. This time, though, enter your email address in **Enterprise identity** and select **Continue**. Do not select **Continue with Google** or **Continue with Microsoft**. You will be redirected to the authentication page of your IdP. --- ## SCIM user management [SCIM](https://scim.cloud/) lets you integrate your identity provider (IdP) with Temporal Cloud to automate user provisioning and access. Once SCIM is configured, changes in your IdP are automatically reflected in Temporal Cloud, including: - User creation / onboarding - User deletion / offboarding - User membership in groups You can map SCIM groups to Temporal Cloud [roles and permissions](/cloud/users#account-level-roles-and-namespace-level-permissions), so users automatically get the Temporal Cloud access they need based on the groups they belong to. :::info SCIM is a paid feature. See the [pricing page](/cloud/pricing) for details. ::: ## Supported IdP Vendors Supported upstream IdP vendors include: * [Okta](#configure-scim-with-okta) * Microsoft Entra ID (Azure AD) * Google Workspace * OneLogin * CyberArk * JumpCloud * PingFederate * Any SCIM 2.0-compliant provider ## Preparing for SCIM Before starting your work with SCIM, you'll need to complete this checklist: 1. Configure [SAML](/cloud/saml) SSO. 1. Identify your organization's **IdP administrator**, who is responsible for configuring and managing your SCIM integration. Specify their contact details when you reach out to support in the next stage of this process. After completing these steps, you're ready to submit your [support ticket](/cloud/support#support-ticket) to enable SCIM. :::tip Adding and removing users When SCIM is enabled for user management, you can still add and remove users outside of SCIM using the Temporal Cloud interface, until you disable user lifecycle management. You can always change a user's or group's Account Role from the Temporal Cloud interface. ::: ## Onboarding with SCIM and Okta {#configure-scim-with-okta} 1. Temporal Support enables the SCIM integration on your account. Enabling integration automatically emails a configuration link to your Okta administrator. This authorizes them to set up the integration. 1. Your Okta administrator opens the supplied link. The link leads to step-by-step instructions for configuring the integration. 1. Once configured in Okta, Temporal Cloud will begin to receive SCIM messages and automatically onboard and offboard the users and groups configured in Okta. Some points to note: - User and group change events are applied within 10 minutes of them being made in Okta. - User lifecycle management with SCIM also allows user roles to be derived from group membership. - Once a group has been synced in Temporal Cloud, you can use `tcld` to assign roles to the group. For instructions, see the [User Group Management](https://github.com/temporalio/tcld?tab=readme-ov-file#user-group-management) page. 
--- ## Monitor Temporal Cloud Temporal Cloud metrics help monitor production deployments. This documentation covers best practices for monitoring Temporal Cloud. ## Monitor availability issues When you see a sudden drop in Worker resource utilization, verify whether Temporal Cloud's API is showing increased latency and error rates. ### Reference Metrics - [temporal\_cloud\_v1\_service\_latency\_p99](/cloud/metrics/openmetrics/metrics-reference#temporal_cloud_v1_service_latency_p99) This metric measures latency for the `SignalWithStartWorkflowExecution`, `SignalWorkflowExecution`, and `StartWorkflowExecution` operations. These operations are mission-critical and never [throttled](/cloud/service-availability#throughput). This metric is a good indicator of your lowest possible latency for the 99th percentile of requests. ## Monitor Temporal Service errors Check for Temporal Service gRPC API errors. Note that Service API errors are not equivalent to the guarantees mentioned in the [Temporal Cloud SLA](/cloud/sla). ### Reference Metrics - [temporal\_cloud\_v1\_frontend\_service\_error\_count](/cloud/metrics/openmetrics/metrics-reference#temporal_cloud_v1_service_error_count) - [temporal\_cloud\_v1\_frontend\_service\_request\_count](/cloud/metrics/openmetrics/metrics-reference#temporal_cloud_v1_service_request_count) ### Prometheus Query for these Metrics Measure your daily average success rate (successful requests over total requests) in 10-minute windows: ``` avg_over_time(( ( ( sum(increase(temporal_cloud_v1_service_request_count{temporal_namespace=~"$namespace", operation=~"StartWorkflowExecution|SignalWorkflowExecution|SignalWithStartWorkflowExecution|RequestCancelWorkflowExecution|TerminateWorkflowExecution"}[10m])) - sum(increase(temporal_cloud_v1_service_error_count{temporal_namespace=~"$namespace", operation=~"StartWorkflowExecution|SignalWorkflowExecution|SignalWithStartWorkflowExecution|RequestCancelWorkflowExecution|TerminateWorkflowExecution"}[10m])) ) / sum(increase(temporal_cloud_v1_service_request_count{temporal_namespace=~"$namespace", operation=~"StartWorkflowExecution|SignalWorkflowExecution|SignalWithStartWorkflowExecution|RequestCancelWorkflowExecution|TerminateWorkflowExecution"}[10m])) ) or vector(1) )[1d:10m]) ``` ## Detecting Activity and Workflow Failures The metrics `temporal_activity_execution_failed` and `temporal_cloud_v1_workflow_failed_count` together provide failure detection for Temporal applications. These metrics work in tandem to give you both granular component-level visibility and high-level workflow health insights. Note that `temporal_activity_execution_failed` is an SDK metric that must be collected from the Worker. ### Activity failure cascade If you are not using infinite retry policies, Activity failures can lead to Workflow failures: ``` Activity Failure --> Retry Logic --> More Activity Failures --> Workflow Decision --> Potential Workflow Failure ``` Activity failures are often recoverable and expected. Workflow failures represent terminal states requiring immediate attention. A spike in Activity failures may precede Workflow failures. Generally, Temporal recommends designing Workflows to always succeed. If an Activity fails more than its retry policy allows, we suggest having the Workflow handle the Activity failure and notify a human who can take corrective action or acknowledge the error.
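To illustrate that recommendation, here is a minimal sketch using the Temporal Python SDK; the `process_order` and `notify_operator` Activities are hypothetical stand-ins for your own business logic and alerting integration, and the retry policy shown is just an example of a bounded policy.

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.common import RetryPolicy
from temporalio.exceptions import ActivityError

with workflow.unsafe.imports_passed_through():
    # Hypothetical Activities defined elsewhere in your application.
    from my_activities import notify_operator, process_order


@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        try:
            # Bounded retry policy: after 5 attempts, the Activity fails for good.
            return await workflow.execute_activity(
                process_order,
                order_id,
                start_to_close_timeout=timedelta(seconds=30),
                retry_policy=RetryPolicy(maximum_attempts=5),
            )
        except ActivityError:
            # Instead of letting the Workflow fail, surface the problem to a
            # human and end in a well-defined state.
            await workflow.execute_activity(
                notify_operator,
                f"Order {order_id} needs manual attention",
                start_to_close_timeout=timedelta(seconds=30),
            )
            return "escalated-to-operator"
```

Handled this way, the exhausted Activity shows up in `temporal_activity_execution_failed`, but the Workflow completes with an "escalated" result rather than adding to `temporal_cloud_v1_workflow_failed_count`.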
### Ratio-based monitoring #### Failure conversion rate Monitor the ratio of workflow failures to activity failures: ``` workflow_failure_rate = temporal_cloud_v1_workflow_failed_count / temporal_activity_execution_failed ``` What to watch for: - High ratio (greater than 0.1): Poor error handling - activities failing are causing workflow failures - Low ratio (less than 0.01): Good resilience - activities fail but workflows recover - Sudden spikes: May indicate systematic issues #### Activity success rate ``` activity_success_rate = (total_activities - temporal_activity_execution_failed) / total_activities ``` Target: >95% for most applications. A lower success rate can be a sign of system trouble. See also: - [Crafting an Error Handling Strategy](https://learn.temporal.io/courses/errstrat/) - [Temporal Failures reference](/references/failures) - [Detecting Workflow failures](/encyclopedia/detecting-workflow-failures) ## Monitor replication lag for Namespaces with High Availability features Replication lag refers to the transmission delay of Workflow updates and history events from the primary Namespace to the replica. Always check the [replication lag metric](/cloud/metrics/openmetrics/metrics-reference#temporal_cloud_v1_replication_lag_p99) before initiating a failover. A forced failover when there is a large replication lag has a higher likelihood of rolling back Workflow progress. **Who owns the replication lag?** Temporal owns replication lag. **What guarantees are available?** There is no SLA for replication lag. Temporal recommends that customers do not trigger failovers except for testing or emergency situations. The High Availability feature's four-nines SLA means that Temporal will handle failovers and ensure high availability. Temporal also monitors replication lag. Customers who decide to trigger failovers should look at this metric before moving forward. **If the lag is high, what should you do?** We don't expect users to fail over. Please contact Temporal support if you feel you have a pressing need. **Where can you read more?** See [operations and metrics](/cloud/high-availability) for Namespaces with High Availability features. ### Reference Metrics - [temporal\_cloud\_v1\_replication\_lag\_p99](/cloud/metrics/openmetrics/metrics-reference#temporal_cloud_v1_replication_lag_p99) - [temporal\_cloud\_v1\_replication\_lag\_p95](/cloud/metrics/openmetrics/metrics-reference#temporal_cloud_v1_replication_lag_p95) - [temporal\_cloud\_v1\_replication\_lag\_p50](/cloud/metrics/openmetrics/metrics-reference#temporal_cloud_v1_replication_lag_p50) ## Detecting Resource Exhaustion The Cloud metric `temporal_cloud_v1_resource_exhausted_error_count` is the primary indicator for Cloud-side throttling, signaling that system limits are exceeded and `ResourceExhausted` gRPC errors are occurring. This generally does not break workflow processing due to how resources are prioritized. Persistent non-zero values of this metric are unexpected. ## Monitoring Trends Against Limits {#rps-aps-rate-limits} The set of [limit metrics](/cloud/metrics/openmetrics/metrics-reference#limit-metrics) provides a time series of values for limits. Use these metrics with their corresponding count metrics to monitor general trends against limits and set alerts when limits are exceeded. Use the corresponding throttle metrics to determine the severity of any active rate limiting.
| Limit Metric | Count Metric | Throttle Metric | | ------------ | ------------ | --------------- | | `temporal_cloud_v1_action_limit` | `temporal_cloud_v1_total_action_count` | `temporal_cloud_v1_total_action_throttled_count` | | `temporal_cloud_v1_service_request_limit` | `temporal_cloud_v1_service_request_count` | `temporal_cloud_v1_service_request_throttled_count` | | `temporal_cloud_v1_operations_limit` | `temporal_cloud_v1_operations_count` | `temporal_cloud_v1_operations_throttled_count` | The [Grafana dashboard example](https://github.com/grafana/jsonnet-libs/blob/master/temporal-mixin/dashboards/temporal-overview.json) includes a Usage & Quotas section that creates demo charts for these limit and count metrics respectively. The limit metrics, throttle metrics, and count metrics are already directly comparable as per-second rates. Keep in mind that each `count` metric is represented as a per-second rate averaged over each minute. For example, to get the total count of Actions, you must multiply this metric by 60. When setting alerts against limits, consider whether your workload is spiky or sensitive to throttling (e.g. does latency matter?). If your workload is sensitive, consider alerting for `temporal_cloud_v1_total_action_count` at a 50% threshold of the `temporal_cloud_v1_action_limit`. If your workload is not sensitive, consider an alert at 90% of the limit, or alert directly when throttling is detected as a value greater than zero for `temporal_cloud_v1_total_action_throttled_count`. This logic can also be used to automatically scale [Temporal Resource Units](/cloud/capacity-modes#provisioned-capacity) up or down as needed. Some workloads choose to exceed limits and accept throttling because they are not latency sensitive.
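As a sketch of that alerting logic, the function below is hypothetical glue code, not a Temporal API; it assumes you have already scraped the limit, count, and throttle metrics (all per-second rates) into your own monitoring system.

```python
def actions_alert(
    action_rate: float,      # temporal_cloud_v1_total_action_count (per-second rate)
    action_limit: float,     # temporal_cloud_v1_action_limit (per-second limit)
    throttled_rate: float,   # temporal_cloud_v1_total_action_throttled_count
    latency_sensitive: bool,
) -> bool:
    """Return True when the Actions workload warrants an alert."""
    if latency_sensitive:
        # Sensitive workloads alert early, at 50% of the limit.
        return action_rate >= 0.5 * action_limit
    # Insensitive workloads alert near the limit, or on any active throttling.
    return action_rate >= 0.9 * action_limit or throttled_rate > 0


# Example: a 300 Actions/s workload against a 400 Actions/s limit. Since each
# count metric is a per-second rate averaged over a minute, 300/s corresponds
# to roughly 300 * 60 = 18,000 Actions per minute.
print(actions_alert(300.0, 400.0, 0.0, latency_sensitive=True))   # True  (>= 50%)
print(actions_alert(300.0, 400.0, 0.0, latency_sensitive=False))  # False (< 90%, no throttling)
```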
--- ## tcld account command reference The `tcld account` commands manage accounts in Temporal Cloud. Alias: `a` - [tcld account audit-log](#audit-log) - [tcld account get](#get) - [tcld account list-regions](#list-regions) - [tcld account metrics](#metrics) ## audit-log The `tcld account audit-log` commands manage Audit Logs in Temporal Cloud. Alias: `al` - [tcld account audit-log kinesis](#kinesis) - [tcld account audit-log pubsub](#pubsub) ### kinesis The `tcld account audit-log kinesis` command manages Kinesis audit log sinks. Alias: `k` - [tcld account audit-log kinesis create](#create) - [tcld account audit-log kinesis delete](#delete) - [tcld account audit-log kinesis get](#account-audit-log-kinesis-get) - [tcld account audit-log kinesis list](#list) - [tcld account audit-log kinesis update](#update) - [tcld account audit-log kinesis validate](#validate) #### create The `tcld account audit-log kinesis create` command creates a Kinesis audit log sink. Alias: `c` ##### --destination-uri The destination URI of the audit log sink. Alias: `du` ##### --region The region to use for the request. Alias: `re` ##### --role-name The role name to use to write to the sink. Alias: `rn` ##### --sink-name Provide a name for the sink. #### delete The `tcld account audit-log kinesis delete` command deletes an audit log sink. Alias: `d` ##### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` ##### --sink-name Provide a name for the sink. #### get {#account-audit-log-kinesis-get} The `tcld account audit-log kinesis get` command gets an audit log sink. Alias: `g` ##### --sink-name Provide a name for the sink. #### list The `tcld account audit-log kinesis list` command lists audit log sinks on the account. Alias: `l` ##### --page-size The page size for list operations. ##### --page-token The page token for list operations. #### update The `tcld account audit-log kinesis update` command updates an audit log sink. Alias: `u` ##### --destination-uri The destination URI of the audit log sink. Alias: `du` ##### --enabled Whether the sink is enabled. ##### --region The region to use for the request. Alias: `re` ##### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` ##### --role-name The role name to use to write to the sink. Alias: `rn` ##### --sink-name Provide a name for the sink. #### validate The `tcld account audit-log kinesis validate` command verifies Temporal Cloud can write to a Kinesis sink. Alias: `v` ##### --destination-uri The destination URI of the audit log sink. Alias: `du` ##### --region The region to use for the request. Alias: `re` ##### --role-name The role name to use to write to the sink. Alias: `rn` ##### --sink-name Provide a name for the sink. ### pubsub The `tcld account audit-log pubsub` command manages Pub/Sub audit log sinks. Alias: `ps` - [tcld account audit-log pubsub create](#create) - [tcld account audit-log pubsub delete](#delete) - [tcld account audit-log pubsub get](#account-audit-log-pubsub-get) - [tcld account audit-log pubsub list](#list) - [tcld account audit-log pubsub update](#update) - [tcld account audit-log pubsub validate](#validate) #### create The `tcld account audit-log pubsub create` command creates a Pub/Sub audit log sink. Alias: `c` ##### --service-account-email The service account email to impersonate to write to the sink. Alias: `sae` ##### --sink-name Provide a name for the sink. ##### --topic-name The topic name to write to the sink. Alias: `tn` #### delete The `tcld account audit-log pubsub delete` command deletes an audit log sink. Alias: `d` ##### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` ##### --sink-name Provide a name for the sink. #### get {#account-audit-log-pubsub-get} The `tcld account audit-log pubsub get` command gets an audit log sink. Alias: `g` ##### --sink-name Provide a name for the sink. #### list The `tcld account audit-log pubsub list` command lists audit log sinks on the account. Alias: `l` ##### --page-size The page size for list operations. ##### --page-token The page token for list operations. #### update The `tcld account audit-log pubsub update` command updates an audit log sink. Alias: `u` ##### --enabled Whether the sink is enabled. ##### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` ##### --service-account-email The service account email to impersonate to write to the sink. Alias: `sae` ##### --sink-name Provide a name for the sink. ##### --topic-name The topic name to write to the sink. Alias: `tn` #### validate The `tcld account audit-log pubsub validate` command verifies Temporal Cloud can write to a Pub/Sub sink. Alias: `v` ##### --service-account-email The service account email to impersonate to write to the sink. Alias: `sae` ##### --sink-name Provide a name for the sink. ##### --topic-name The topic name to write to the sink. Alias: `tn` ## get The `tcld account get` command gets information about the Temporal Cloud account you are logged into. Alias: `g` `tcld account get` The command has no modifiers.
## list-regions The `tcld account list-regions` command lists all regions where the account can provision namespaces. Alias: `l` ## metrics The `tcld account metrics` commands configure the metrics endpoint for the Temporal Cloud account that is currently logged in. Alias: `m` - [tcld account metrics enable](#enable) - [tcld account metrics disable](#disable) - [tcld account metrics accepted-client-ca](#accepted-client-ca) ### accepted-client-ca The `tcld account metrics accepted-client-ca` commands manage the end-entity certificates for the metrics endpoint of the Temporal Cloud account that is currently logged in. :::info The end-entity certificates for the metrics endpoint must chain up to the CA certificate used for the account. For more information, see [Certificate requirements](/cloud/certificates#certificate-requirements). ::: Alias: `ca` - [tcld account metrics accepted-client-ca add](#add) - [tcld account metrics accepted-client-ca list](#list) - [tcld account metrics accepted-client-ca set](#set) - [tcld account metrics accepted-client-ca remove](#remove) #### add The `tcld account metrics accepted-client-ca add` command adds end-entity certificates to the metrics endpoint of a Temporal Cloud account. :::info The end-entity certificates for the metrics endpoint must chain up to the CA certificate used for the account. For more information, see [Certificate requirements](/cloud/certificates#certificate-requirements). ::: `tcld account metrics accepted-client-ca add --ca-certificate <encoded-certificate>` Alias: `a` The following modifiers control the behavior of the command. ##### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld account metrics accepted-client-ca add --request-id <request-id> --ca-certificate <encoded-certificate> ``` ##### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld account metrics accepted-client-ca add --resource-version <resource-version> --ca-certificate <encoded-certificate> ``` ##### --ca-certificate _Required modifier unless `--ca-certificate-file` is specified_ Specify a base64-encoded string of a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-c` **Example** ```bash tcld account metrics accepted-client-ca add --ca-certificate <encoded-certificate> ``` ##### --ca-certificate-file _Required modifier unless `--ca-certificate` is specified_ Specify a path to a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-f` **Example** ```bash tcld account metrics accepted-client-ca add --ca-certificate-file <path> ``` #### list The `tcld account metrics accepted-client-ca list` command lists the end-entity certificates that are currently configured for the metrics endpoint of a Temporal Cloud account. `tcld account metrics accepted-client-ca list` Alias: `l` The command has no modifiers. #### remove The `tcld account metrics accepted-client-ca remove` command removes end-entity certificates from the metrics endpoint of a Temporal Cloud account. `tcld account metrics accepted-client-ca remove --ca-certificate <encoded-certificate>` Alias: `r` The following modifiers control the behavior of the command. ##### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier.
Alias: `-r` **Example** ```bash tcld account metrics accepted-client-ca remove --request-id <request-id> --ca-certificate <encoded-certificate> ``` ##### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld account metrics accepted-client-ca remove --resource-version <resource-version> --ca-certificate <encoded-certificate> ``` ##### --ca-certificate _Required modifier unless `--ca-certificate-fingerprint` or `--ca-certificate-file` is specified_ Specify a base64-encoded string of a CA certificate PEM file. If `--ca-certificate-fingerprint` is also specified, both `--ca-certificate` and `--ca-certificate-file` are ignored. If `--ca-certificate-file` is also specified but `--ca-certificate-fingerprint` is not, only `--ca-certificate` is used. Alias: `-c` **Example** ```bash tcld account metrics accepted-client-ca remove --ca-certificate <encoded-certificate> ``` ##### --ca-certificate-file _Required modifier unless `--ca-certificate-fingerprint` or `--ca-certificate` is specified_ Specify a path to a CA certificate PEM file. If `--ca-certificate-fingerprint` is also specified, both `--ca-certificate-file` and `--ca-certificate` are ignored. If `--ca-certificate` is also specified but `--ca-certificate-fingerprint` is not, only `--ca-certificate` is used. Alias: `-f` **Example** ```bash tcld account metrics accepted-client-ca remove --ca-certificate-file <path> ``` ##### --ca-certificate-fingerprint _Required modifier unless `--ca-certificate` or `--ca-certificate-file` is specified_ Specify the fingerprint of a CA certificate. If `--ca-certificate`, `--ca-certificate-file`, or both are also specified, they are ignored. Alias: `--fp` **Example** ```bash tcld account metrics accepted-client-ca remove --ca-certificate-fingerprint <fingerprint> ``` #### set The `tcld account metrics accepted-client-ca set` command sets the end-entity certificates for the metrics endpoint of a Temporal Cloud account. :::info The end-entity certificates for the metrics endpoint must chain up to the CA certificate used for the account. For more information, see [Certificate requirements](/cloud/certificates#certificate-requirements). ::: `tcld account metrics accepted-client-ca set --ca-certificate <encoded-certificate>` Alias: `s` The following modifiers control the behavior of the command. ##### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld account metrics accepted-client-ca set --request-id <request-id> --ca-certificate <encoded-certificate> ``` ##### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld account metrics accepted-client-ca set --resource-version <resource-version> --ca-certificate <encoded-certificate> ``` ##### --ca-certificate _Required modifier unless `--ca-certificate-file` is specified_ Specify a base64-encoded string of a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-c` **Example** ```bash tcld account metrics accepted-client-ca set --ca-certificate <encoded-certificate> ``` ##### --ca-certificate-file _Required modifier unless `--ca-certificate` is specified_ Specify a path to a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used.
Alias: `-f` **Example** ```bash tcld account metrics accepted-client-ca set --ca-certificate-file <path> ``` ### enable The `tcld account metrics enable` command enables the metrics endpoint for the Temporal Cloud account that is currently logged in. :::info The end-entity certificates for the metrics endpoint _must_ be configured before the endpoint can be enabled. See the [tcld account metrics accepted-client-ca](#accepted-client-ca) commands. ::: `tcld account metrics enable` The command has no modifiers. ### disable The `tcld account metrics disable` command disables the metrics endpoint for the Temporal Cloud account that is currently logged in. `tcld account metrics disable` The command has no modifiers. --- ## tcld apikey command reference The `tcld apikey` commands manage API Keys in Temporal Cloud. Alias: `ak` - [tcld apikey create](#create) - [tcld apikey get](#get) - [tcld apikey list](#list) - [tcld apikey delete](#delete) - [tcld apikey disable](#disable) - [tcld apikey enable](#enable) ## create The `tcld apikey create` command creates an API Key in Temporal Cloud. `tcld apikey create --name <name> --description <description> --duration <duration> --expiry <expiry> --request-id <request-id>` The following options control the behavior of the command. #### --name _Required modifier_ Specify the display name of the API Key. Alias: `-n` **Example** ```bash tcld apikey create --name <name> ``` #### --description Specify a description for the API Key. Alias: `-desc` **Example** ```bash tcld apikey create --name <name> --description "Your API Key" ``` #### --duration Specify the duration from now when the API Key will expire. This will be ignored if the expiry flag is set. Example format: `24h` (default: 0s). Alias: `-d` **Example** ```bash tcld apikey create --name <name> --duration 24h ``` #### --expiry Specify the absolute timestamp (RFC3339) when the API Key will expire. Example: `2023-11-28T09:23:24-08:00`. Alias: `-e` **Example** ```bash tcld apikey create --name <name> --expiry '2023-11-28T09:23:24-08:00' ``` #### --request-id Specify a request-id for the asynchronous operation. If not set, the server will assign one. Alias: `-r` **Example** ```bash tcld apikey create --name <name> --request-id <request-id> ``` ## get The `tcld apikey get` command retrieves the details of a specified API Key in Temporal Cloud. `tcld apikey get --id <id>` The following option controls the behavior of the command. #### --id _Required modifier_ Specify the ID of the API Key to retrieve. Alias: `-i` **Example** ```bash tcld apikey get --id <id> ``` ## list The `tcld apikey list` command lists all API Keys in Temporal Cloud. `tcld apikey list` This command does not require any specific options. Alias: `l` **Example** ```bash tcld apikey list ``` ## delete The `tcld apikey delete` command deletes an API Key in Temporal Cloud. `tcld apikey delete --id <id> [--resource-version <resource-version>] [--request-id <request-id>]` The following options control the behavior of the command. #### --id _Required modifier_ Specify the ID of the API Key to delete. Alias: `-i` **Example** ```bash tcld apikey delete --id <id> ``` #### --resource-version Specify the resource-version (etag) to update from. If not set, the CLI will use the latest. Alias: `-v` **Example** ```bash tcld apikey delete --id <id> --resource-version <resource-version> ``` #### --request-id Specify a request-id for the asynchronous operation. If not set, the server will assign one. Alias: `-r` **Example** ```bash tcld apikey delete --id <id> --request-id <request-id> ``` ## disable The `tcld apikey disable` command disables an API Key in Temporal Cloud.
`tcld apikey disable --id [--resource-version ] [--request-id ]` The following options control the behavior of the command. #### --id _Required modifier_ Specify the ID of the API Key to disable. Alias: `-i` **Example** ```bash tcld apikey disable --id ``` #### --resource-version Specify the resource-version (etag) to update from. If not set, the CLI will use the latest. Alias: `-v` **Example** ```bash tcld apikey disable --id --resource-version ``` #### --request-id Specify a request-id for the asynchronous operation. If not set, the server will assign one. Alias: `-r` **Example** ```bash tcld apikey disable --id --request-id ``` ## enable The `tcld apikey enable` command enables a disabled API Key in Temporal Cloud. `tcld apikey enable --id [--resource-version ] [--request-id ]` The following options control the behavior of the command. #### --id _Required modifier_ Specify the ID of the API Key to enable. Alias: `-i` **Example** ```bash tcld apikey enable --id ``` #### --resource-version Specify the resource-version (etag) to update from. If not set, the CLI will use the latest. Alias: `-v` **Example** ```bash tcld apikey enable --id --resource-version ``` #### --request-id Specify a request-id for the asynchronous operation. If not set, the server will assign one. Alias: `-r` **Example** ```bash tcld apikey enable --id --request-id ``` --- ## tcld connectivity-rule command reference The `tcld connectivity-rule` commands manage [connectivity rules](/cloud/connectivity#connectivity-rules) in Temporal Cloud. Alias: `cr` - [tcld connectivity-rule create](#create) - [tcld connectivity-rule delete](#delete) - [tcld connectivity-rule get](#get) - [tcld connectivity-rule list](#list) ## create The `tcld connectivity-rule create` command creates a connectivity rule. Alias: `c` #### --connection-id The connection ID of the private connection. Alias: `ci` #### --connectivity-type The type of connectivity. Currently, only 'private' and 'public' are supported. Alias: `ct` #### --gcp-project-id The GCP project ID of the connection; required if the cloud provider is 'gcp'. Alias: `gpi` #### --region The region of the connection. Alias: `r` ## delete The `tcld connectivity-rule delete` command deletes a connectivity rule. Alias: `d` #### --connectivity-rule-id The connectivity rule ID. Alias: `id` ## get The `tcld connectivity-rule get` command gets a connectivity rule. Alias: `g` #### --connectivity-rule-id The connectivity rule ID. Alias: `id` ## list The `tcld connectivity-rule list` command lists connectivity rules. Alias: `l` #### --namespace The Namespace hosted on Temporal Cloud. Alias: `n` --- ## tcld feature command reference The `tcld feature` commands manage features in Temporal Cloud. Alias: `f` - [tcld feature get](#get) - [tcld feature toggle](#toggle) ## get The `tcld feature get` command gets information about the Temporal Cloud features you've enabled. Alias: `g` `tcld feature get` The command has no modifiers. **Example** `tcld feature get` The following is an example output: ```json [ { "Name": "enable-apikey", "Value": true } ] ``` ## toggle The `tcld feature toggle-*` command turns the `*` feature on or off in Temporal Cloud. :::note The `*` symbol represents the name of the feature. Replace `*` with the name of the available feature to toggle. ::: Alias: `tak` `tcld feature toggle-*` The command has no modifiers. **Example** `tcld feature toggle-apikey` The following is an example output: ```text Feature flag enable-apikey is now true ``` :::note The feature `apikey` is an example.
Update the feature name to toggle a different feature. ::: --- ## tcld generate-certificates command reference The `tcld generate-certificates` commands generate certificate authority (CA) and end-entity TLS certificates for Temporal Cloud. Alias: `gen` - [tcld generate-certificates certificate-authority-certificate](#certificate-authority-certificate) - [tcld generate-certificates end-entity-certificate](#end-entity-certificate) ## tcld generate-certificates certificate-authority-certificate {#certificate-authority-certificate} The `tcld generate-certificates certificate-authority-certificate` command generates certificate authority (CA) certificates for Temporal Cloud. `tcld generate-certificates certificate-authority-certificate ` Alias: `ca` The following modifiers control the behavior of the command. #### --organization Specify an organization name for certificate generation. Alias: `--org` **Example** ```bash tcld generate-certificates certificate-authority-certificate --organization ``` #### --validity-period Specify the duration for which the certificate is valid. Format values as d/h (for example, `30d10h` for a certificate lasting 30 days and 10 hours). Alias: `-d` **Example** ```bash tcld generate-certificates certificate-authority-certificate --validity-period ``` #### --ca-certificate-file Specify a path to a `.pem` file where the generated X.509 certificate file will be stored. Alias: `--ca-cert` **Example** ```bash tcld generate-certificates certificate-authority-certificate --ca-certificate-file ``` #### --ca-key-file Specify a path to a `.key` file where the certificate's private key will be stored. Alias: `--ca-key` **Example** ```bash tcld generate-certificates certificate-authority-certificate --ca-key-file ``` #### --rsa-algorithm When enabled, a 4096-bit RSA key pair is generated for the certificate instead of an ECDSA P-384 key pair. Because an ECDSA P-384 key pair is the recommended default, this option is disabled by default. Alias: `--rsa` **Example** ```bash tcld generate-certificates certificate-authority-certificate --rsa-algorithm ``` ## tcld generate-certificates end-entity-certificate {#end-entity-certificate} The `tcld generate-certificates end-entity-certificate` command generates end-entity (leaf) certificates for Temporal Cloud. `tcld generate-certificates end-entity-certificate ` Alias: `leaf` The following modifiers control the behavior of the command. #### --organization Specify an organization name for certificate generation. Alias: `--org` **Example** ```bash tcld generate-certificates end-entity-certificate --organization ``` #### --organization-unit Optional: Specify the name of the organization unit. **Example** ```bash tcld generate-certificates end-entity-certificate --organization-unit ``` #### --validity-period Specify the duration for which the certificate is valid. Format values as d/h (for example, `30d10h` for a certificate lasting 30 days and 10 hours). Alias: `-d` **Example** ```bash tcld generate-certificates end-entity-certificate --validity-period ``` #### --ca-certificate-file Specify the path of the X.509 CA certificate in a `.pem` file for the certificate authority. Alias: `--ca-cert` **Example** ```bash tcld generate-certificates end-entity-certificate --ca-certificate-file ``` #### --ca-key-file Specify the path of the private key in a `.key` file for the certificate authority.
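Taken together with the output flags described below, a typical flow generates a CA and then a leaf certificate signed by it. A sketch; the organization, validity periods, and file names are illustrative:

```bash
# Generate a CA certificate and private key valid for one year.
tcld generate-certificates certificate-authority-certificate \
  --org your-org --validity-period 365d \
  --ca-cert ca.pem --ca-key ca.key
# Generate a leaf certificate signed by that CA, valid for 30 days.
tcld generate-certificates end-entity-certificate \
  --org your-org --validity-period 30d \
  --ca-cert ca.pem --ca-key ca.key \
  --cert client.pem --key client.key
```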
Alias: `--ca-key` **Example** ```bash tcld generate-certificates end-entity-certificate --ca-key-file ``` #### --certificate-file Specify a path to a `.pem` file where the generated X.509 leaf certificate file will be stored. Alias: `--cert` **Example** ```bash tcld generate-certificates end-entity-certificate --certificate-file ``` #### --key-file Specify a path to a `.key` file where the leaf certificate's private key will be stored. Alias: `--key` **Example** ```bash tcld generate-certificates end-entity-certificate --key-file ``` --- ## tcld command reference The Temporal Cloud CLI (tcld) is a command-line tool that you can use to interact with Temporal Cloud. - [How to install tcld](#install-tcld) ### tcld commands - [tcld account](/cloud/tcld/account) - [tcld apikey](/cloud/tcld/apikey) - [tcld connectivity-rule](/cloud/tcld/connectivity-rule) - [tcld feature](/cloud/tcld/feature) - [tcld generate-certificates](/cloud/tcld/generate-certificates) - [tcld login](/cloud/tcld/login) - [tcld logout](/cloud/tcld/logout/) - [tcld namespace](/cloud/tcld/namespace) - [tcld nexus](/cloud/tcld/nexus) - [tcld request](/cloud/tcld/request) - [tcld user](/cloud/tcld/user) - [tcld version](/cloud/tcld/version/) ### Global modifiers #### --auto_confirm Automatically confirm all prompts. You can specify the value for this modifier by setting the AUTO_CONFIRM environment variable. The default value is `false`. ## How to install tcld {#install-tcld} You can install [tcld](/cloud/tcld) in two ways. ### Install tcld by using Homebrew ```bash brew install temporalio/brew/tcld ``` ### Build tcld from source 1. Verify that you have Go 1.18 or later installed. ```bash go version ``` If Go 1.18 or later is not installed, follow the [Download and install](https://go.dev/doc/install) instructions on the Go website. 1. Clone the tcld repository and run make. ```bash git clone https://github.com/temporalio/tcld.git cd tcld make ``` 1. Copy the tcld executable to any directory that appears in the PATH environment variable, such as `/usr/local/bin`. ```bash cp tcld /usr/local/bin/tcld ``` 1. Verify that tcld is installed. ```bash tcld version ``` --- ## tcld login command reference The `tcld login` command logs in a user to Temporal Cloud. Follow instructions in the browser to log in to your Temporal account. Alias: `l` `tcld login` The command has no modifiers. --- ## tcld logout command reference The `tcld logout` command logs a user out of Temporal Cloud. Alias: `lo` `tcld logout` The following modifier controls the behavior of the command. #### --disable-pop-up Disables a browser pop-up if set to `true`. The default value is `false`. --- ## tcld namespace command reference The `tcld namespace` commands enable [Namespace](/namespaces) operations in Temporal Cloud. Alias: `n` :::info Namespace ID Format The `--namespace` flag accepts a **Namespace ID** in the format `<namespace_name>.<account_id>` (e.g., `your-namespace.a1b2c`). This is the full identifier shown in Temporal Cloud, not just the [Namespace Name](/cloud/namespaces#temporal-cloud-namespace-name). You can find your account suffix in the Temporal Cloud UI.
::: - [tcld namespace add-region](#add-region) - [tcld namespace create](#create) - [tcld namespace delete](#delete) - [tcld namespace delete-region](#delete-region) - [tcld namespace failover](#failover) - [tcld namespace get](#get) - [tcld namespace list](#list) - [tcld namespace export](#export) - [tcld namespace accepted-client-ca](#accepted-client-ca) - [tcld namespace certificate-filters](#certificate-filters) - [tcld namespace search-attributes](#search-attributes) - [tcld namespace retention](#retention) - [tcld namespace update-codec-server](#update-codec-server) - [tcld namespace update-high-availability](#update-high-availability) - [tcld namespace tags](#tags) - [tcld namespace set-connectivity-rules](#set-connectivity-rules) ## add-region Use `tcld namespace add-region` to add a region to an existing Temporal Cloud [Namespace](/namespaces), upgrading it to support [High Availability](/cloud/high-availability). See [Regions](/cloud/regions) for available regions and their supported replication options. The following modifiers control the behavior of the command. #### --request-id The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier. Alias: `-r` #### --namespace **Required.** Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable `$TEMPORAL_CLOUD_NAMESPACE` is used. Alias: `-n` #### --region **Required.** The region to add to the existing Namespace. See [Regions](/cloud/regions) for a list of supported regions. :::tip Choosing Replica Regions See [Regions](/cloud/regions) for available regions and their supported replication options. See [High Availability](/cloud/high-availability) to learn how replication and failover work. ::: Alias: `--re` **Example** ```bash tcld namespace add-region \ --namespace \ --region ``` Specify the region code of the region where you want to create the replica as an argument to the `--region` flag. See [High Availability](/cloud/high-availability) for details on same-region, multi-region, and multi-cloud replication options. Temporal Cloud sends an email alert once your Namespace is ready for use. #### --cloud-provider The cloud provider of the region. One of [`aws`, `gcp`]. Default: `aws` ## create The `tcld namespace create` command creates a Temporal [Namespace](/namespaces) in Temporal Cloud. Alias: `c` The following modifiers control the behavior of the command. #### --namespace **Required.** The name for the new Namespace. This becomes part of the Namespace ID (`<namespace_name>.<account_id>`). Alias: `-n` #### --region **Required.** The cloud provider region to create the Namespace in. Supply one `--region` for a standard Namespace, or two for a Namespace with [High Availability](/cloud/high-availability). See [Regions](/cloud/regions) for available regions and their supported replication options. Alias: `--re` #### --auth-method The authentication method for the Namespace. One of [`mtls`, `api_key`]. - `mtls` (default): Requires `--ca-certificate` or `--ca-certificate-file` - `api_key`: Requires no additional certificate modifiers **Example** ```bash tcld namespace create \ --namespace test-namespace.a1b2c \ --region us-east-1 \ --auth-method api_key ``` #### --ca-certificate A base64-encoded [CA certificate](/cloud/certificates). If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-c` #### --ca-certificate-file A path to a [CA certificate](/cloud/certificates) PEM file. If both options are specified, only `--ca-certificate` is used.
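For the default `mtls` auth method, a minimal creation sketch (the Namespace ID, region, and certificate path are illustrative):

```bash
tcld namespace create \
  --namespace test-namespace.a1b2c \
  --region us-east-1 \
  --ca-certificate-file ca.pem
```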
Alias: `--cf` #### --certificate-filter-file Path to a JSON file that defines the [certificate filters](/cloud/certificates#manage-certificate-filters) to be applied to the Namespace. Sample JSON: `{ "filters": [ { "commonName": "test1" } ] }` If both `--certificate-filter-file` and `--certificate-filter-input` are specified, the command returns an error. Alias: `--cff` #### --certificate-filter-input A JSON string that defines the [certificate filters](/cloud/certificates#manage-certificate-filters) to be applied to the Namespace. Sample JSON: `{ "filters": [ { "commonName": "test1" } ] }` If both `--certificate-filter-input` and `--certificate-filter-file` are specified, the command returns an error. Alias: `--cfi` #### --cloud-provider The cloud provider of the region. One of [`aws`, `gcp`]. Default: `aws` Alias: `--cp` #### --connectivity-rule-ids A list of [connectivity rule](/cloud/connectivity#connectivity-rules) IDs to apply to the Namespace. Can be specified more than once. Alias: `--ids` **Example** ```bash tcld namespace create \ --namespace test-namespace.a1b2c \ --region us-east-1 \ --auth-method api_key \ --connectivity-rule-ids \ --connectivity-rule-ids ``` #### --enable-delete-protection Enable [delete protection](/cloud/namespaces#delete-protection) on the Namespace. Default: `false` Alias: `--edp` #### --endpoint The [codec server](/production-deployment/data-encryption) endpoint to decode payloads for all users interacting with this Namespace. Must be HTTPS. Alias: `-e` #### --include-credentials Include cross-origin credentials when calling the [codec server](/production-deployment/data-encryption). Default: `false` Alias: `--ic` #### --pass-access-token Pass the user access token to the [codec server](/production-deployment/data-encryption) endpoint. Default: `false` Alias: `--pat` #### --request-id The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier. Alias: `-r` #### --retention-days The [retention period](/temporal-service/temporal-server#retention-period) in days for closed Workflow Executions. Default: `30` Alias: `--rd` #### --search-attribute A custom [Search Attribute](/search-attribute) in the form '_name_=_type_'. Can be specified more than once. Valid values for _type_: `Bool` | `Datetime` | `Double` | `Int` | `Keyword` | `Text` Alias: `--sa` **Example** ```bash tcld namespace create \ --namespace test-namespace.a1b2c \ --region us-east-1 \ --auth-method api_key \ --search-attribute "customer_id=Int" \ --search-attribute "customer_name=Text" ``` #### --tag A [tag](/cloud/namespaces#tag-a-namespace) in the form "_key_=_value_". Can be specified more than once. See [Tag structure and limits](/cloud/namespaces#tag-structure-and-limits). Alias: `--t` **Example** ```bash tcld namespace create \ --namespace test-namespace.a1b2c \ --region us-east-1 \ --auth-method api_key \ --tag "key=value" \ --tag "key2=value2" ``` #### --user-namespace-permission A [Namespace-level permission](/cloud/users#namespace-level-permissions) for a user in the form '_email_=_permission_'. Can be specified more than once. Valid values for _permission_: `Admin` | `Write` | `Read` Alias: `-p` **Example** ```bash tcld namespace create \ --namespace test-namespace.a1b2c \ --region us-east-1 \ --auth-method api_key \ --user-namespace-permission "user@example.com=Admin" \ --user-namespace-permission "user2@example.com=Write" ``` ## delete The `tcld namespace delete` command deletes the specified [Namespace](/namespaces) in Temporal Cloud. 
Alias: `d` `tcld namespace delete` The following modifiers control the behavior of the command. #### --namespace **Required.** Specify the Namespace hosted on Temporal Cloud to be deleted. Alias: `-n` #### --request-id The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier. Alias: `-r` #### --resource-version A resource version (ETag) to update from. If not set, the CLI uses the latest. Alias: `-v` **Example** ```bash tcld namespace delete \ --namespace ``` ## delete-region Use `tcld namespace delete-region` to remove a replica region from an existing Temporal Cloud [Namespace](/namespaces). Removing a replica disables [High Availability features](/cloud/high-availability) and results in a mandatory 7-day waiting period before you can re-enable High Availability features in the same location. Refer to [Enable High Availability](/cloud/high-availability/enable) for more information. The following modifiers control the behavior of the command. #### --request-id The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier. Alias: `-r` #### --namespace **Required.** Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable `$TEMPORAL_CLOUD_NAMESPACE` is used. Alias: `-n` #### --region **Required.** The region to remove from the Namespace. Upon removal, Temporal stops replication and the Namespace becomes a Standard Namespace. You cannot re-add a region or add a new region for seven days after removing a Namespace region. Alias: `--re` **Example** ```bash tcld namespace delete-region \ --namespace \ --region ``` When using API key authentication, add your API credentials before pressing Enter: ```bash tcld --api-key \ namespace delete-region \ --namespace \ --region ``` #### --cloud-provider The cloud provider of the region to remove. One of [`aws`, `gcp`]. Default: `aws` ## failover Fail over a Temporal Namespace with [High Availability features](/cloud/high-availability). A failover switches a Namespace region from a primary Namespace to its replica. **Example** ```bash tcld namespace failover \ --namespace \ --region ``` When using API key authentication, add your API credentials before pressing Enter: ```bash tcld --api-key \ namespace failover \ --namespace \ --region ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` #### --namespace **Required.** Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable `$TEMPORAL_CLOUD_NAMESPACE` is used. Alias: `-n` #### --region **Required.** The region to fail over _to_. See [Regions](/cloud/regions) for a list of supported regions. Alias: `--re` #### --ca-certificate _Required modifier unless `--ca-certificate-file` is specified_. A base64-encoded CA certificate. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-c` #### --cloud-provider The cloud provider of the region to fail over to. One of [`aws`, `gcp`]. Default: `aws` ## get The `tcld namespace get` command gets information about the specified [Namespace](/namespaces) in Temporal Cloud. Alias: `g` `tcld namespace get` The following modifier controls the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used.
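Because most `tcld namespace` commands fall back to this environment variable, you can export it once and omit the flag. A sketch; the Namespace ID is illustrative:

```bash
export TEMPORAL_CLOUD_NAMESPACE=your-namespace.a1b2c
tcld namespace get   # equivalent to passing --namespace explicitly
```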
Alias: `-n` **Example** ```bash tcld namespace get \ --namespace ``` ## list The `tcld namespace list` command lists all [Namespaces](/namespaces) in Temporal Cloud. Alias: `l` `tcld namespace list` The command has no modifiers. ## export The `tcld namespace export s3` commands manage Workflow History Exports. Valid options: `s3` Alias: `es` - [tcld namespace export s3 create](#create) - [tcld namespace export s3 get](#get) - [tcld namespace export s3 delete](#delete) - [tcld namespace export s3 list](#list) - [tcld namespace export s3 update](#update) - [tcld namespace export s3 validate](#validate) ### create The `tcld namespace export s3 create` command allows users to create an export sink for the Namespace of a Temporal Cloud account. **Example** ```bash tcld namespace export s3 create \ --namespace \ --sink-name \ --s3-bucket-name \ --role-arn ``` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --sink-name Provide a name for the export sink. _Required modifier_ #### --role-arn Provide the role ARN for the IAM Role. _Required modifier_ #### --s3-bucket-name Provide the name of an AWS S3 bucket that Temporal will send closed workflow histories to. _Required modifier_ #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` #### --kms-arn Provide the ARN of the KMS key to use for encryption. Note: To add or update the KMS key, create the IAM Role with the appropriate KMS permissions or modify the existing IAM Role accordingly; supplying the ARN as part of this input alone has no effect. ### get The `tcld namespace export s3 get` command allows users to retrieve details about an existing export sink from the Namespace of a Temporal Cloud account. **Example** ```bash tcld namespace export s3 get \ --namespace \ --sink-name ``` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --sink-name Provide the name of the export sink you wish to retrieve details for. _Required modifier_ ### delete The `tcld namespace export s3 delete` command allows users to delete an existing export sink from the Namespace of a Temporal Cloud account. **Example** ```bash tcld namespace export s3 delete \ --namespace \ --sink-name ``` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --sink-name Provide the name of the export sink you wish to delete. _Required modifier_ #### --resource-version Specify a resource version (ETag) to delete from. If not specified, the CLI will use the latest version. Alias: `-v` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` ### list The `tcld namespace export s3 list` command allows users to list all existing export sinks within the Namespace of a Temporal Cloud account. **Example** ```bash tcld namespace export s3 list \ --namespace ``` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud.
If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --page-size Determine the number of results to return per page for list operations. If not specified, the default value is 100. #### --page-token Provide the page token to continue listing results from where the previous list operation left off. ### update The `tcld namespace export s3 update` command allows users to modify the details of an existing export sink within the Namespace of a Temporal Cloud account. **Example** ```bash tcld namespace export s3 update \ --namespace \ --sink-name \ --enabled true ``` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --sink-name Provide the name of the export sink you wish to update. _Required modifier_ #### --enabled Specify whether the export is enabled. #### --role-arn Update the role ARN for the IAM Role. #### --s3-bucket-name Update the name of the AWS S3 bucket that Temporal will send closed workflow histories to. #### --resource-version Specify a resource version (ETag) to update from. If not specified, the CLI will use the latest version. Alias: `-v` #### --kms-arn Update the ARN of the KMS key used for encryption. Note: To add or update the KMS key, create the IAM Role with the appropriate KMS permissions or modify the existing IAM Role accordingly; supplying the ARN as part of this input alone has no effect. #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` ### validate The `tcld namespace export s3 validate` command allows users to validate an export sink from the Namespace of a Temporal Cloud account. **Example** ```bash tcld namespace export s3 validate \ --namespace \ --sink-name \ --s3-bucket-name \ --role-arn ``` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --sink-name Provide the name of the export sink you wish to validate. _Required modifier_ #### --role-arn Provide the role ARN for the IAM Role. _Required modifier_ #### --s3-bucket-name Provide the name of the AWS S3 bucket that Temporal will send closed workflow histories to. #### --kms-arn Provide the ARN of the KMS key used for encryption. Note: To add or update the KMS key, create the IAM Role with the appropriate KMS permissions or modify the existing IAM Role accordingly; supplying the ARN as part of this input alone has no effect. ## accepted-client-ca The `tcld namespace accepted-client-ca` commands manage the client CA certificates of the specified [Namespace](/namespaces) in Temporal Cloud. The certificates are used to verify client connections. :::note Base64 versions of the CA certificate files are accepted by these commands. ::: Alias: `ca` - [tcld namespace accepted-client-ca add](#add) - [tcld namespace accepted-client-ca list](#list) - [tcld namespace accepted-client-ca set](#set) - [tcld namespace accepted-client-ca remove](#remove) :::important Do not use a CA certificate that is signed with an insecure signature algorithm, such as SHA-1. Such signatures will be rejected. Existing CA certificates that use SHA-1 can stop working without warning.
For more information about the vulnerabilities of SHA-1, see [SHAttered](https://shattered.io/). ::: ### add The `tcld namespace accepted-client-ca add` command adds client CA certificates to a [Namespace](/namespaces) in Temporal Cloud. `tcld namespace accepted-client-ca add --ca-certificate ` Alias: `a` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace accepted-client-ca add \ --namespace \ --ca-certificate ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace accepted-client-ca add \ --request-id \ --ca-certificate ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace accepted-client-ca add \ --resource-version \ --ca-certificate ``` #### --ca-certificate _Required modifier unless `--ca-certificate-file` is specified_ Specify a base64-encoded string of a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-c` **Example** ```bash tcld namespace accepted-client-ca add \ --ca-certificate ``` #### --ca-certificate-file _Required modifier unless `--ca-certificate` is specified_ Specify a path to a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-f` **Example** ```bash tcld namespace accepted-client-ca add \ --ca-certificate-file ``` ### list The `tcld namespace accepted-client-ca list` command lists the client CA certificates that are currently configured for a [Namespace](/namespaces) in Temporal Cloud. `tcld namespace accepted-client-ca list` Alias: `l` The following modifier controls the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace accepted-client-ca list \ --namespace ``` ### remove The `tcld namespace accepted-client-ca remove` command removes client CA certificates from a [Namespace](/namespaces) in Temporal Cloud. `tcld namespace accepted-client-ca remove --ca-certificate ` Alias: `r` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace accepted-client-ca remove \ --namespace \ --ca-certificate ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace accepted-client-ca remove \ --request-id \ --ca-certificate ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace accepted-client-ca remove \ --resource-version \ --ca-certificate ``` #### --ca-certificate _Required modifier unless `--ca-certificate-fingerprint` or `--ca-certificate-file` is specified_ Specify the base64-encoded string of a CA certificate PEM file. 
If `--ca-certificate-fingerprint` is also specified, both `--ca-certificate` and `--ca-certificate-file` are ignored. If `--ca-certificate-file` is also specified but `--ca-certificate-fingerprint` is not, only `--ca-certificate` is used. Alias: `-c` **Example** ```bash tcld namespace accepted-client-ca remove \ --ca-certificate ``` #### --ca-certificate-file _Required modifier unless `--ca-certificate-fingerprint` or `--ca-certificate` is specified_ Specify a path to a CA certificate PEM file. If `--ca-certificate-fingerprint` is also specified, both `--ca-certificate-file` and `--ca-certificate` are ignored. If `--ca-certificate` is also specified but `--ca-certificate-fingerprint` is not, only `--ca-certificate` is used. Alias: `-f` **Example** ```bash tcld namespace accepted-client-ca remove \ --ca-certificate-file ``` #### --ca-certificate-fingerprint _Required modifier unless `--ca-certificate` or `--ca-certificate-file` is specified_ Specify the fingerprint of a CA certificate. If `--ca-certificate`, `--ca-certificate-file`, or both are also specified, they are ignored. Alias: `--fp` **Example** ```bash tcld namespace accepted-client-ca remove \ --ca-certificate-fingerprint ``` ### set The `tcld namespace accepted-client-ca set` command sets the client CA certificates for a [Namespace](/namespaces) in Temporal Cloud. `tcld namespace accepted-client-ca set --ca-certificate ` Alias: `s` When updating CA certificates, it's important to follow a rollover process. Doing so enables your Namespace to serve both CA certificates for a period of time until traffic to your old CA certificate ceases. 1. Create a single file that contains both your old and new CA certificate PEM blocks. Just concatenate the PEM blocks on adjacent lines. ``` -----BEGIN CERTIFICATE----- ... old CA cert ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... new CA cert ... -----END CERTIFICATE----- ``` 1. Run the `tcld namespace accepted-client-ca set` command with the CA certificate bundle file. ```bash tcld namespace accepted-client-ca set \ --ca-certificate-file ``` 1. Monitor traffic to your old certificate until it ceases. 1. Create another file that contains only the new CA certificate. 1. Run the `tcld namespace accepted-client-ca set` command again with the file that contains only the new CA certificate. The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace accepted-client-ca set \ --namespace \ --ca-certificate ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace accepted-client-ca set \ --request-id \ --ca-certificate ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace accepted-client-ca set \ --resource-version \ --ca-certificate ``` #### --ca-certificate _Required modifier unless `--ca-certificate-file` is specified_ Specify a base64-encoded string of a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used.
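The rollover steps above can be scripted. A sketch; the file names and Namespace ID are illustrative:

```bash
# Step 1: bundle the old and new CA certificates.
cat old-ca.pem new-ca.pem > ca-bundle.pem
# Step 2: serve both CAs while clients migrate.
tcld namespace accepted-client-ca set \
  --namespace your-namespace.a1b2c \
  --ca-certificate-file ca-bundle.pem
# Steps 4 and 5: once traffic to the old CA stops, set only the new CA.
tcld namespace accepted-client-ca set \
  --namespace your-namespace.a1b2c \
  --ca-certificate-file new-ca.pem
```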
Alias: `-c` **Example** ```bash tcld namespace accepted-client-ca set \ --ca-certificate ``` #### --ca-certificate-file _Required modifier unless `--ca-certificate` is specified_ Specify a path to a CA certificate PEM file. If both `--ca-certificate` and `--ca-certificate-file` are specified, only `--ca-certificate` is used. Alias: `-f` **Example** ```bash tcld namespace accepted-client-ca set \ --ca-certificate-file ``` ## certificate-filters The `tcld namespace certificate-filters` commands manage optional certificate filters for the specified [Namespace](/namespaces) in Temporal Cloud. The Namespace can use certificate filters to authorize client certificates based on distinguished name (DN) fields. Alias: `cf` - [tcld namespace certificate-filters add](#add) - [tcld namespace certificate-filters import](#import) - [tcld namespace certificate-filters export](#export) - [tcld namespace certificate-filters clear](#clear) ### add The `tcld namespace certificate-filters add` command adds additional certificate filters to the Namespace of a Temporal Cloud account. The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace certificate-filters add \ --namespace \ --certificate-filter-file ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace certificate-filters add \ --request-id \ --certificate-filter-file ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace certificate-filters add \ --resource-version \ --certificate-filter-file ``` #### --certificate-filter-file _Required modifier unless `--certificate-filter-input` is specified._ Specify a path to a JSON file defining the certificate filters for the Namespace. Aliases: `-f`, `--file` **Example** ```bash tcld namespace certificate-filters add \ --certificate-filter-file ``` #### --certificate-filter-input _Required modifier unless `--certificate-filter-file` is specified._ The certificate filters, in JSON, that will be added to the Namespace. Aliases: `-i`, `--input` **Example** ```bash tcld namespace certificate-filters add \ --certificate-filter-input ``` ### clear The `tcld namespace certificate-filters clear` command clears all certificate filters from a [Namespace](/namespaces) in Temporal Cloud. :::caution Using this command allows _any_ client certificate that chains up to a configured CA certificate to connect to the Namespace. ::: `tcld namespace certificate-filters clear` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace certificate-filters clear \ --namespace ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace certificate-filters clear --request-id ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used.
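For the `add` command above, the filter file can be created inline using the sample JSON from this section. A sketch; the file name and Namespace ID are illustrative:

```bash
# Write a one-filter definition to disk, then apply it.
cat > filters.json <<'EOF'
{ "filters": [ { "commonName": "test1" } ] }
EOF
tcld namespace certificate-filters add \
  --namespace your-namespace.a1b2c \
  --certificate-filter-file filters.json
```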
Alias: `-v` **Example** ```bash tcld namespace certificate-filters clear \ --resource-version ``` ### export The `tcld namespace certificate-filters export` command exports existing certificate filters from a [Namespace](/namespaces) in Temporal Cloud. `tcld namespace certificate-filters export --certificate-filter-file ` Alias: `exp` The following modifiers control the behavior of the command. #### --certificate-filter-file Specify a path to a JSON file where tcld can export the certificate filters. Aliases: `--file`, `-f` **Example** ```bash tcld namespace certificate-filters export \ --certificate-filter-file ``` #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace certificate-filters export \ --namespace \ --certificate-filter-file ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace certificate-filters export \ --request-id \ --certificate-filter-file ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace certificate-filters export \ --resource-version \ --certificate-filter-file ``` ### import The `tcld namespace certificate-filters import` command sets certificate filters for a [Namespace](/namespaces) in Temporal Cloud. `tcld namespace certificate-filters import --certificate-filter-file ` Alias: `imp` A certificate filter can include any combination (and at least one) of the following: - `commonName` - `organization` - `organizationalUnit` - `subjectAlternativeName` The following modifiers control the behavior of the command. #### --certificate-filter-file _Required modifier unless `--certificate-filter-input` is specified_ Specify a path to a JSON file that defines certificate filters to be applied to the Namespace, such as `{ "filters": [ { "commonName": "test1" } ] }`. The specified filters replace any existing filters. If both `--certificate-filter-file` and `--certificate-filter-input` are specified, the command returns an error. Aliases: `--file`, `-f` **Example** ```bash tcld namespace certificate-filters import \ --certificate-filter-file ``` #### --certificate-filter-input _Required modifier unless `--certificate-filter-file` is specified_ Specify a JSON string that defines certificate filters to be applied to the Namespace, such as `{ "filters": [ { "commonName": "test1" } ] }`. The specified filters replace any existing filters. If both `--certificate-filter-input` and `--certificate-filter-file` are specified, the command returns an error. Aliases: `--input`, `-i` **Example** ```bash tcld namespace certificate-filters import \ --certificate-filter-input ``` #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace certificate-filters import \ --namespace \ --certificate-filter-input ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace certificate-filters import \ --request-id \ --certificate-filter-input ``` #### --resource-version Specify a resource version (ETag) to update from.
If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace certificate-filters import \ --resource-version \ --certificate-filter-input ``` ## search-attributes The `tcld namespace search-attributes` commands manage [Search Attributes](/search-attribute) of the specified [Namespace](/namespaces) in Temporal Cloud. Alias: `sa` - [tcld namespace search-attributes add](#add) - [tcld namespace search-attributes rename](#rename) If you wish to delete a Search Attribute, please contact [Support](/cloud/support) at [support.temporal.io](https://support.temporal.io). ### add The `tcld namespace search-attributes add` command adds custom [Search Attributes](/search-attribute) to a Namespace in Temporal Cloud. `tcld namespace search-attributes add --search-attribute ` Alias: `a` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace search-attributes add \ --namespace \ --search-attribute ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace search-attributes add \ --request-id \ --search-attribute ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace search-attributes add \ --resource-version \ --search-attribute ``` #### --search-attribute _Required modifier; can be specified more than once_ Specify a custom Search Attribute in the form "_name_=_type_". Valid values for _type_ are as follows: - Bool - Datetime - Double - Int - Keyword - Text Alias: `--sa` **Example** ```bash tcld namespace search-attributes add \ --search-attribute "YourSearchAttribute1=Text" \ --search-attribute "YourSearchAttribute2=Double" ``` ### rename The `tcld namespace search-attributes rename` command renames a custom [Search Attribute](/search-attribute) in Temporal Cloud. `tcld namespace search-attributes rename --existing-name --new-name ` The following modifiers control the behavior of the command. #### --namespace Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace search-attributes rename \ --namespace \ --existing-name \ --new-name ``` #### --request-id Specify a request identifier to use for the asynchronous operation. If not specified, the server assigns a request identifier. Alias: `-r` **Example** ```bash tcld namespace search-attributes rename \ --request-id \ --existing-name \ --new-name ``` #### --resource-version Specify a resource version (ETag) to update from. If not specified, the latest version is used. Alias: `-v` **Example** ```bash tcld namespace search-attributes rename \ --resource-version \ --existing-name \ --new-name ``` #### --existing-name _Required modifier_ Specify the name of an existing Search Attribute. Alias: `--en` **Example** ```bash tcld namespace search-attributes rename \ --existing-name \ --new-name ``` #### --new-name _Required modifier_ Specify a new name for the Search Attribute. 
Alias: `--nn` **Example** ```bash tcld namespace search-attributes rename \ --existing-name \ --new-name ``` ## retention The `tcld namespace retention` commands manage the length of time (in days) a closed Workflow is preserved before deletion for a given Namespace in Temporal Cloud. Alias: `r` - [tcld namespace retention get](#get) - [tcld namespace retention set](#set) ### get Retrieve the length of time (in days) a closed Workflow will be preserved before deletion for the specified Namespace. Alias: `g` The following modifier controls the behavior of the command. #### --namespace _Required modifier_ Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace retention get \ --namespace ``` ### set Set the length of time (in days) a closed Workflow will be preserved before deletion for the specified Namespace. Alias: `s` The following modifiers control the behavior of the command. #### --namespace _Required modifier_ Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --retention-days _Required modifier_ Specify the number of days a closed Workflow will be preserved before deletion. Alias: `--rd` **Example** ```bash tcld namespace retention set \ --namespace \ --retention-days ``` ## update-codec-server The `tcld namespace update-codec-server` command updates the configuration of a codec server for Temporal Cloud, which allows payloads to be decoded through a remote endpoint. Alias: `ucs` The following modifiers control the behavior of the command. #### --namespace _Required modifier._ Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` **Example** ```bash tcld namespace update-codec-server \ --namespace \ --endpoint ``` #### --endpoint _Required modifier._ Specify an endpoint to decode payloads for all users interacting with this Namespace. Endpoints must be valid https URLs. Alias: `-e` **Example** ```bash tcld namespace update-codec-server \ --namespace \ --endpoint ``` #### --pass-access-token Enables a user access token to be passed to the remote endpoint. This is set to `false` by default. Alias: `--pat` **Example** ```bash tcld namespace update-codec-server \ --namespace \ --endpoint \ --pass-access-token ``` #### --include-credentials Enables the inclusion of cross-origin credentials. This is set to `false` by default. Alias: `--ic` **Example** ```bash tcld namespace update-codec-server \ --namespace \ --endpoint \ --include-credentials true ``` ## update-high-availability {#update-high-availability} The `tcld namespace update-high-availability` command enables you to adjust settings for your [Namespace](/namespaces) with [High Availability features](/cloud/high-availability). Alias: `uha` The following modifiers control the behavior of the command. #### --namespace _Required modifier._ Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --disable-auto-failover Specify whether Temporal Cloud should perform and trigger automatic failovers. Pass `true` or `false` (default).
Alias: `-daf` **Example** ```bash tcld namespace update-high-availability \ --namespace \ --disable-auto-failover=true ``` When using API key authentication, add your API credentials before pressing Enter: ```bash tcld --api-key \ namespace update-high-availability \ --namespace \ --disable-auto-failover=true ``` ## tags The `tcld namespace tags` commands manage [Tags](/cloud/namespaces#tag-a-namespace) of the specified [Namespace](/namespaces) in Temporal Cloud. Alias: `t` - [tcld namespace tags upsert](#upsert) - [tcld namespace tags remove](#remove) ### upsert Add new tags or update existing tag values for the specified Namespace. Alias: `u` The following modifiers control the behavior of the command. #### --namespace _Required modifier_ Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --request-id The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier. Alias: `-r` #### --tag _Required modifier; can be specified more than once_ A tag in the form "_key_=_value_". See [Tag structure and limits](/cloud/namespaces#tag-structure-and-limits). Alias: `--t` **Example** ```bash tcld namespace tags upsert \ --namespace \ --tag "key1=value1" \ --tag "key2=updated" ``` ### remove Remove existing tags from the specified Namespace by key. Alias: `rm` The following modifiers control the behavior of the command. #### --namespace _Required modifier_ Specify a Namespace hosted on Temporal Cloud. If not specified, the value of the environment variable $TEMPORAL_CLOUD_NAMESPACE is used. Alias: `-n` #### --request-id The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier. Alias: `-r` #### --tag-key _Required modifier; can be specified more than once_ A tag key string. See [Tag Key structure and limits](/cloud/namespaces#tag-structure-and-limits). Alias: `--tk` **Example** ```bash tcld namespace tags remove \ --namespace \ --tag-key "key1" \ --tag-key "key2" ``` ## set-connectivity-rules The `tcld namespace set-connectivity-rules` command enables you to set connectivity rules on your [Namespace](/namespaces). Alias: `scrs` #### --connectivity-rule-ids A list of connectivity rule IDs; can also be used when creating or updating a Namespace. Example: `--ids id1 --ids id2 --ids id3`. Alias: `ids` #### --namespace The Namespace hosted on Temporal Cloud. Alias: `n` #### --remove-all Acknowledges that all connectivity rules will be removed, enabling connectivity from any source. --- ## tcld nexus command reference The `tcld nexus` commands manage Nexus resources in Temporal Cloud. Alias: `nxs` - [tcld nexus endpoint](#endpoint) ## endpoint The `tcld nexus endpoint` commands manage Nexus Endpoints in Temporal Cloud. Alias: `ep` - [tcld nexus endpoint allowed-namespace](#allowed-namespace) - [tcld nexus endpoint create](#create) - [tcld nexus endpoint delete](#delete) - [tcld nexus endpoint get](#get) - [tcld nexus endpoint list](#list) - [tcld nexus endpoint update](#update) ### allowed-namespace The `tcld nexus endpoint allowed-namespace` commands manage the allowed namespaces for a Nexus Endpoint.
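For example, to let an additional Namespace call an existing endpoint (a sketch; the endpoint and Namespace names are illustrative):

```bash
tcld nexus endpoint allowed-namespace add \
  --name my-endpoint \
  --namespace caller-namespace.a1b2c
```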
Alias: `an` - [tcld nexus endpoint allowed-namespace add](#add) - [tcld nexus endpoint allowed-namespace list](#list) - [tcld nexus endpoint allowed-namespace remove](#remove) - [tcld nexus endpoint allowed-namespace set](#set) #### add The `tcld nexus endpoint allowed-namespace add` command adds allowed namespaces to a Nexus Endpoint. Alias: `a` ##### --name Endpoint name. Alias: `n` ##### --namespace Namespace that is allowed to call this endpoint. Alias: `ns` ##### --request-id The request-id to use for the asynchronous operation; if not set, the server will assign one (optional). Alias: `r` ##### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` #### list The `tcld nexus endpoint allowed-namespace list` command lists the allowed namespaces of a Nexus Endpoint. Alias: `l` ##### --name Endpoint name. Alias: `n` #### remove The `tcld nexus endpoint allowed-namespace remove` command removes allowed namespaces from a Nexus Endpoint. Alias: `r` ##### --name Endpoint name. Alias: `n` ##### --namespace Namespace that is allowed to call this endpoint. Alias: `ns` ##### --request-id The request-id to use for the asynchronous operation; if not set, the server will assign one (optional). Alias: `r` ##### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` #### set The `tcld nexus endpoint allowed-namespace set` command sets the allowed namespaces of a Nexus Endpoint. Alias: `s` ##### --name Endpoint name. Alias: `n` ##### --namespace Namespace that is allowed to call this endpoint. Alias: `ns` ##### --request-id The request-id to use for the asynchronous operation; if not set, the server will assign one (optional). Alias: `r` ##### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` ### create The `tcld nexus endpoint create` command creates a new Nexus Endpoint on the Cloud Account. An endpoint name is used in workflow code to invoke Nexus operations. The endpoint target is a worker, so `--target-namespace` and `--target-task-queue` must both be provided. This will fail if an endpoint with the same name is already registered. Alias: `c` #### --allow-namespace Namespace that is allowed to call this endpoint (optional). Alias: `ans` #### --description Endpoint description in markdown format (optional). Alias: `d` #### --description-file Endpoint description file in markdown format (optional). Alias: `df` #### --name Endpoint name. Alias: `n` #### --request-id The request-id to use for the asynchronous operation; if not set, the server will assign one (optional). Alias: `r` #### --target-namespace Namespace in which a handler worker will be polling for Nexus tasks. Alias: `tns` #### --target-task-queue Task Queue on which a handler worker will be polling for Nexus tasks. Alias: `ttq` ### delete The `tcld nexus endpoint delete` command deletes a Nexus Endpoint on the Cloud Account. Alias: `d` #### --name Endpoint name. Alias: `n` #### --request-id The request-id to use for the asynchronous operation; if not set, the server will assign one (optional). Alias: `r` #### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` ### get The `tcld nexus endpoint get` command gets a Nexus Endpoint configuration by name from the Cloud Account. Alias: `g` #### --name Endpoint name.
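Putting the `create` flags described above together, a minimal sketch (the endpoint name, Namespace IDs, and Task Queue are illustrative):

```bash
tcld nexus endpoint create \
  --name my-endpoint \
  --target-namespace handler-namespace.a1b2c \
  --target-task-queue my-handler-queue \
  --allow-namespace caller-namespace.a1b2c
```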
Alias: `n` ### list The `tcld nexus endpoint list` command lists all Nexus Endpoint configurations on the Cloud Account. Alias: `l` ### update The `tcld nexus endpoint update` command updates an existing Nexus Endpoint on the Cloud Account. An endpoint name is used in workflow code to invoke Nexus operations. The endpoint target is a worker, identified by `--target-namespace` and `--target-task-queue`. The endpoint is patched; any existing fields for which flags are not provided are left as they were. Alias: `u` #### --description Endpoint description in markdown format (optional). Alias: `d` #### --description-file Endpoint description file in markdown format (optional). Alias: `df` #### --name Endpoint name. Alias: `n` #### --request-id The request-id to use for the asynchronous operation; if not set, the server will assign one (optional). Alias: `r` #### --resource-version The resource-version (etag) to update from; if not set, the CLI will use the latest (optional). Alias: `v` #### --target-namespace Namespace in which a handler worker will be polling for Nexus tasks (optional). Alias: `tns` #### --target-task-queue Task Queue on which a handler worker will be polling for Nexus tasks (optional). Alias: `ttq` #### --unset-description Unset endpoint description. --- ## tcld request command reference The `tcld request` commands manage asynchronous requests in Temporal Cloud. Alias: `r` - [tcld request get](#get) ## get The `tcld request get` command gets the status of the specified request in Temporal Cloud. `tcld request get --request-id ` Alias: `g` The following modifiers control the behavior of the command. #### --request-id _Required modifier_ Specify a request identifier. Alias: `-r` **Example** ```bash tcld request get --request-id ``` --- ## tcld user-group command reference The `tcld user-group` commands manage user groups in Temporal Cloud. Alias: `ug` - [tcld user-group add-users](#add-users) - [tcld user-group create](#create) - [tcld user-group delete](#delete) - [tcld user-group get](#get) - [tcld user-group list](#list) - [tcld user-group list-members](#list-members) - [tcld user-group remove-users](#remove-users) - [tcld user-group set-access](#set-access) ## add-users The `tcld user-group add-users` command adds users to the specified user group in Temporal Cloud. You must set `--group-id` to specify the group to add users to. Alias: `au` The following flags control the behavior of the command. #### --group-id (-id) Specify the ID of the group to add users to. #### --user-email (-e) Specify the email of the user to add. This flag can be specified multiple times to add multiple users in one command. ## create Creates a user group. Alias: `c` The following flags control the behavior of the command. #### --display-name The display name of the group. #### --account-role The account role that the group should have. One of `admin`, `read`, `developer`, `owner`, `financeadmin`, `none`. #### --namespace-role (-nr) Specifies a namespace role that the group should have. Can be repeated multiple times to add multiple namespace roles to the group. Value is in the form `<namespace_id>-<role>`, where the namespace ID is the full ID of the namespace and the role is one of `admin`, `read`, or `write`. Example: `mynamespace.abc123-read` adds the read role for the `mynamespace.abc123` namespace. ## delete Deletes the user group. Alias: `d` The following flags control the behavior of the command. #### --group-id (-id) Specify the ID of the group to delete. ## get Gets the user group details.
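As a concrete sketch of the `create` flags described above (the display name, role, and Namespace ID are illustrative):

```bash
tcld user-group create \
  --display-name "payments-team" \
  --account-role developer \
  --namespace-role "mynamespace.abc123-write"
```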
Alias: `g`

The following flags control the behavior of the command.

#### --group-id (-id)

Specify the ID of the group to get.

## list

List the user groups in your Temporal Cloud account.

Alias: `l`

The following flags control the behavior of the command.

#### --page-size (-s)

The number of groups to list per page. Defaults to 10.

#### --page-token (-p)

The page token used when paginating through result pages.

## list-members

Lists all of the members of a group.

Alias: `lm`

The following flags control the behavior of the command.

#### --group-id (-id)

Specify the ID of the group to list members of.

## remove-users

Removes one or more users as members of the group.

Alias: `ru`

The following flags control the behavior of the command.

#### --group-id (-id)

Specify the ID of the group to remove users from.

#### --user-email (-e)

The email address of the user to remove from the group. This flag can be specified multiple times in order to remove multiple users with one command.

## set-access

This command sets the access roles for a group. It follows the same conventions as the [create](#create) command by specifying an optional account role and zero or more namespace roles.

Alias: `sa`

#### --group-id (-id)

Specify the ID of the group to set access for.

#### --account-role

The account role that the group should have. One of `admin`, `read`, `developer`, `owner`, `financeadmin`, `none`.

#### --namespace-role (-nr)

Specifies a namespace role that the group should have. Can be repeated multiple times to add multiple namespace roles to the group. The value is in the form `<namespace-id>-<role>`, where the namespace ID is the full ID of the namespace and the role is one of `admin`, `read`, or `write`. Example: `mynamespace.abc123-read` adds the read role for the `mynamespace.abc123` namespace.

#### --append (-a)

Appends the given namespace roles instead of replacing all existing roles already assigned. This allows namespace roles to be added without knowing what roles are already assigned to the group.

#### --remove (-r)

Removes the given namespace roles instead of replacing all existing roles already assigned. This allows namespace roles to be removed without knowing what roles are already assigned to the group.

---

## tcld user command reference

The `tcld user` commands manage users in Temporal Cloud.

Alias: `u`

- [tcld user delete](#delete)
- [tcld user get](#get)
- [tcld user invite](#invite)
- [tcld user list](#list)
- [tcld user resend-invite](#resend-invite)
- [tcld user set-account-role](#set-account-role)
- [tcld user set-namespace-permissions](#set-namespace-permissions)

## delete

The `tcld user delete` command deletes the specified user in Temporal Cloud. You must set either `--user-email` or `--user-id` to specify the user to be deleted.

Alias: `d`

The following modifiers control the behavior of the command.

#### --user-email

Specify the email address of the user to delete.

**Example**

```command
tcld user delete --user-email <email>
```

#### --user-id

Specify the user identifier of the user to delete.

**Example**

```command
tcld user delete --user-id <user-id>
```

#### --request-id

The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier.

Alias: `-r`

#### --resource-version

Specify a resource version (ETag) to update from. If not specified, the latest version is used.

Alias: `-v`

## get

The `tcld user get` command gets information about the specified user in Temporal Cloud. You must set either `--user-email` or `--user-id`.

Alias: `g`

The following modifiers control the behavior of the command.
#### --user-email

Specify the email address of the user to get information about.

**Example**

```command
tcld user get --user-email <email>
```

#### --user-id

Specify the user identifier of the user to get information about.

**Example**

```command
tcld user get --user-id <user-id>
```

## invite

The `tcld user invite` command invites the specified user to join Temporal Cloud.

Alias: `i`

The following modifiers control the behavior of the command.

#### --user-email

_Required modifier_

Specify the email address of the user to be invited. You can supply this modifier multiple times to invite multiple users in a single request.

Alias: `-e`

#### --account-role

_Required modifier_

Specify the [account-level Role](/cloud/users#account-level-roles) for the invited user. Available account roles: `admin` | `developer` | `read`.

Alias: `--ar`

#### --namespace-permission

Specify the [Namespace-level permissions](/cloud/users#namespace-level-permissions) for the invited user. You can supply this modifier multiple times to set multiple Namespace permissions in a single request. Each value must be in the format of `namespace=permission-type`. Available namespace permissions: `Admin` | `Write` | `Read`.

Alias: `-p`

#### --request-id

The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier.

Alias: `-r`

```command
tcld user invite --user-email <email> --account-role developer --namespace-permission ns1=Admin --namespace-permission ns2=Write --request-id <123456>
```

## list

The `tcld user list` command returns a paginated list of users in Temporal Cloud.

Alias: `l`

**Example**

```command
tcld user list
```

The following modifiers control the behavior of the command.

#### --namespace

List users that have permissions to the Namespace.

Alias: `-n`

**Example**

```command
tcld user list --namespace <namespace>
```

#### --page-token

Page token for paging the list users request.

Alias: `-p`

#### --page-size

Page size for paging the list users request. Defaults to 10.

Alias: `-s`

## resend-invite

The `tcld user resend-invite` command resends an invitation to the specified user in Temporal Cloud. You must set either `--user-email` or `--user-id` to specify the user to receive another invitation.

Alias: `ri`

The following modifiers control the behavior of the command.

#### --user-email

Specify the email address of the user to resend an invitation to.

**Example**

```bash
tcld user resend-invite --user-email <email>
```

#### --user-id

Specify the user identifier of the user to resend an invitation to.

**Example**

```bash
tcld user resend-invite --user-id <user-id>
```

#### --request-id

The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier.

Alias: `-r`

## set-account-role

The `tcld user set-account-role` command sets an [account-level Role](/cloud/users#account-level-roles) for the specified user in Temporal Cloud. You must set either `--user-email` or `--user-id`.

Alias: `ri`

The following modifiers control the behavior of the command.

#### --account-role

_Required modifier_

Specify the account-level Role to assign to the user. Available account roles: `admin` | `developer` | `read`.

Alias: `-ar`

#### --user-email

Specify the email address of the user to assign an account-level Role to.

Alias: `-e`

**Example**

```command
tcld user set-account-role --user-email <email> --account-role developer
```

#### --user-id

Specify the user identifier of the user to assign an account-level Role to.
Alias: `--id`

**Example**

```command
tcld user set-account-role --user-id <user-id> --account-role developer
```

#### --request-id

The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier.

Alias: `-r`

#### --resource-version

Specify a resource version (ETag) to update from. If not specified, the latest version is used.

Alias: `-v`

## set-namespace-permissions

The `tcld user set-namespace-permissions` command sets [Namespace-level permissions](/cloud/users#namespace-level-permissions) for a specified user in Temporal Cloud. You must set either `--user-email` or `--user-id`.

Alias: `snp`

The following modifiers control the behavior of the command.

#### --user-email

Specify the email address of the user to assign Namespace-level permissions to.

**Example**

```command
tcld user set-namespace-permissions --user-email <email>
```

#### --user-id

Specify the user identifier of the user to assign Namespace-level permissions to.

**Example**

```command
tcld user set-namespace-permissions --user-id <user-id>
```

#### --request-id

The request identifier to use for the asynchronous operation. If not set, the server assigns an identifier.

Alias: `-r`

#### --resource-version

Specify a resource version (ETag) to update from. If not specified, the latest version is used.

Alias: `-v`

#### --namespace-permission

Specify the [Namespace-level permissions](/cloud/users#namespace-level-permissions) for the specified user. You can supply this modifier multiple times to set multiple Namespace permissions in a single request. Each value must be in the format of `namespace=permission-type`. Available namespace permissions: `Admin` | `Write` | `Read`.

Alias: `-p`

---

## tcld version command reference

The `tcld version` command gets version information about tcld.

Alias: `v`

`tcld version`

The command has no modifiers.

---

## Temporal Cloud Terraform provider

The Temporal Cloud Terraform provider allows you to use Terraform to manage resources for Temporal Cloud. The Terraform tool manages infrastructure as code (IaC). With this provider, you can use Terraform to automate Temporal Cloud resource management, including Namespaces, Users, Service Accounts, API Keys, and more.

:::note Terraform Management

Once a resource is managed by Terraform, you should only use Terraform to manage that resource.

:::

Resources:

- The [Temporal Cloud Terraform provider](https://registry.terraform.io/providers/temporalio/temporalcloud/latest) is available in the Terraform Registry, where you can find detailed documentation on the provider's supported resources and data sources.
- The GitHub repository for the Terraform provider is [terraform-provider-temporalcloud](https://github.com/temporalio/terraform-provider-temporalcloud/tree/main), where you can report bugs, provide feature requests, and [contribute](https://github.com/temporalio/terraform-provider-temporalcloud/blob/main/CONTRIBUTING.md) to the provider. We encourage your input as we develop the provider with the community.
- To view the list of available Temporal Cloud resources supported by the Terraform provider, visit the resources section of the provider documentation in HashiCorp's [registry](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs).

### Prerequisites

To use the Terraform provider, you'll need the following:

- The [Terraform CLI](https://developer.hashicorp.com/terraform/cli)
- An [API Key](/cloud/api-keys): an API Key is required to use the Terraform provider.
  - See [the API docs](https://docs.temporal.io/cloud/api-keys#generate-an-api-key) for instructions on generating an API Key.

:::note OpenTofu Registry

Our Terraform provider is registered with [OpenTofu](https://opentofu.org), but that registration is not maintained or managed by Temporal Technologies.

:::

## Setup

Generate an [API Key](https://docs.temporal.io/cloud/api-keys#generate-an-api-key) to authenticate Terraform operations with your Temporal Cloud account or a Service Account. Then, either use an environment variable or pass the API Key into the provider manually to manage your Temporal Cloud Terraform resources.

Follow these examples to pass your API Key to the provider through an environment variable.

On macOS or Linux, export the environment variable for secure access to the API Key:

```bash
# replace with the "secretKey": output from the tcld apikey create command
export TEMPORAL_CLOUD_API_KEY=<your-api-key>
```

On Windows, set the environment variable instead:

```bash
# replace with the "secretKey": output from the tcld apikey create command
set TEMPORAL_CLOUD_API_KEY=<your-api-key>
```

:::tip ENVIRONMENT VARIABLES

Do not confuse environment variables, set with your shell, with temporal env options.

:::

Or, pass it in manually in your .tf file using the provider code block:

```hcl
provider "temporalcloud" {
  api_key = "my-temporalcloud-api-key"
}
```

## Manage Temporal Cloud Namespaces with Terraform

Terraform is a great way to automate the management of Temporal Namespaces. It doesn't matter whether you want management to be centralized within a platform team or federated to different product teams. The provider allows you to import, create, update, and delete Namespaces with Terraform.

You must use an Identity with Temporal Cloud Namespace management privileges. This includes the Account Owner, Global Admin, or Developer Account Role.

For more detailed examples of how to manage Namespaces via Terraform, check the [Terraform Registry documentation for Namespaces](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/namespace).

**How do I create a Namespace with Terraform?**

1. Create a Terraform configuration file (`terraform.tf`) to define a Namespace.

```hcl
terraform {
  required_providers {
    temporalcloud = {
      source = "temporalio/temporalcloud"
    }
  }
}

provider "temporalcloud" {

}

resource "temporalcloud_namespace" "namespace" {
  name               = "terraform"
  regions            = ["aws-us-east-1"]
  accepted_client_ca = base64encode(file("ca.pem"))
  retention_days     = 14
}
```

In this example, you create a Temporal Cloud Namespace named `terraform`, specifying the AWS region `aws-us-east-1` and the path to the CA certificate.

1. Initialize the Terraform provider.

Run the following command to initialize the Terraform provider.

```bash
terraform init
```

1. Apply the Terraform configuration.

Once initialization occurs, apply the Terraform configuration to your Temporal Cloud account.

```bash
terraform apply
```

Follow the onscreen prompts. Upon completion, you'll see a success message indicating your Namespace is created.

```bash
temporalcloud_namespace.terraform: Creation complete after 2m17s [id=]
```

You can find more examples of Namespace management in the Terraform Provider docs located on HashiCorp's [Terraform Registry](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/namespace).
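To make the validation step that follows easier, you can also surface the new Namespace's ID as a Terraform output. This is a minimal sketch, not part of the official example, using the resource's `id` attribute (the same attribute other examples in this guide reference):

```hcl
# Expose the Namespace ID (in the format namespaceid.acctid) so it can be
# copied into tcld commands after `terraform apply`.
output "namespace_id" {
  value = temporalcloud_namespace.namespace.id
}
```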
The Terraform Provider docs also show how to generate CA certs within Terraform configuration files and create a Namespace with API Key-based authentication.

**How do I validate the creation of the Namespace?**

You can validate the creation of the Namespace through the Temporal Web UI or through the `tcld namespace get` command.

**Using the Temporal Web UI**

1. Log into the Temporal Cloud Web UI.
1. Navigate to the Namespaces page.
1. Search for the Namespace you created.

**Using the tcld CLI utility**

Validate the creation of your Namespace with the `tcld namespace get` command. Run it and pass in your [Cloud Namespace Name](/cloud/namespaces#temporal-cloud-namespace-name) and [Cloud Account Id](/cloud/namespaces#temporal-cloud-account-id):

```bash
tcld namespace get -n "<namespace>.<account-id>"
```

**How do I update a Temporal Cloud Namespace?**

Terraform automatically recognizes changes made within `.tf` files and applies those changes to Temporal. For example, change the retention period setting in the Terraform file from the previous example and watch Terraform apply the change without any additional steps required by you.

1. Set the retention period to 30 days.

```hcl
terraform {
  required_providers {
    temporalcloud = {
      source  = "temporalio/temporalcloud"
      version = ">= 0.0.6"
    }
  }
}

provider "temporalcloud" {

}

resource "temporalcloud_namespace" "namespace" {
  name               = "terraform"
  regions            = ["aws-us-east-1"]
  accepted_client_ca = base64encode(file("ca.pem"))
  retention_days     = 30
}
```

1. Apply your configuration. When prompted, answer yes to continue:

```command
terraform apply
```

Upon completion, you will see a success message indicating your Namespace has been updated. It may take several minutes to update a Namespace.

```text
temporalcloud_namespace.namespace: Modifications complete after 10s [id=terraform.a1bb2]
```

**How do I delete a Temporal Cloud Namespace?**

To delete a Namespace, remove the `temporalcloud_namespace` resource and all dependent resource configurations from your Terraform files and run the `terraform apply` command. Upon completion, you will see a success message indicating the resource has been destroyed:

```text
temporalcloud_namespace.my_namespace: Destruction complete after 3s

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.
```

:::note Preventing Deletion

You can prevent deletion of any Terraform resource by including the `prevent_destroy` argument in the Terraform configuration file.

:::

**How do I import a Temporal Cloud Namespace?**

If you have an existing Namespace in Temporal Cloud, you can import it into Terraform to manage the Namespace from Terraform using the `terraform import` command.

1. Provide a configuration placeholder in your Terraform configuration.

```hcl
resource "temporalcloud_namespace" "namespace" {

}
```

1. Run the `terraform import` command from the command line and pass in the Namespace ID. Your Namespace ID is available at the top of the Namespace's page in the Temporal Cloud UI and is in the format `namespaceid.acctid`.

```bash
terraform import temporalcloud_namespace.terraform namespaceid.acctid
```

The Namespace is now a part of the Terraform state, and all changes to the Namespace should be managed by Terraform.

:::caution

Once a resource has been imported into Terraform, outside changes to the resource will create Terraform "drift" errors on subsequent Terraform operations.

:::
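The `prevent_destroy` guard mentioned in the deletion notes above is a standard Terraform `lifecycle` meta-argument. As a minimal sketch applied to the Namespace resource from this section:

```hcl
resource "temporalcloud_namespace" "namespace" {
  name               = "terraform"
  regions            = ["aws-us-east-1"]
  accepted_client_ca = base64encode(file("ca.pem"))
  retention_days     = 14

  # Terraform errors out on any plan that would destroy this resource,
  # protecting the Namespace from accidental deletion.
  lifecycle {
    prevent_destroy = true
  }
}
```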
## Manage Temporal Cloud Nexus Endpoints with Terraform

Terraform provides a great way to automate the management of [Nexus Endpoints](/nexus/endpoints). The provider allows you to import, create, update, and delete Nexus Endpoints with Terraform.

You must use an Identity with the [Developer role (or higher)](/cloud/users#account-level-roles) and the [Namespace Admin permission](/cloud/users#namespace-level-permissions) on the Endpoint's target Namespace.

**How do I create a Nexus Endpoint with Terraform?**

1. Create a Terraform configuration file (`terraform.tf`) to define a Nexus Endpoint. From the [example in the Terraform Registry](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/nexus_endpoint):

```hcl
terraform {
  required_providers {
    temporalcloud = {
      source = "temporalio/temporalcloud"
    }
  }
}

provider "temporalcloud" {

}

resource "temporalcloud_namespace" "target_namespace" {
  name           = "terraform-target-namespace"
  regions        = ["aws-us-west-2"]
  api_key_auth   = true
  retention_days = 14

  timeouts {
    create = "10m"
    delete = "10m"
  }
}

resource "temporalcloud_namespace" "caller_namespace" {
  name           = "terraform-caller-namespace"
  regions        = ["aws-us-east-1"]
  api_key_auth   = true
  retention_days = 14

  timeouts {
    create = "10m"
    delete = "10m"
  }
}

resource "temporalcloud_namespace" "caller_namespace_2" {
  name           = "terraform-caller-namespace-2"
  regions        = ["gcp-us-central1"]
  api_key_auth   = true
  retention_days = 14

  timeouts {
    create = "10m"
    delete = "10m"
  }
}

resource "temporalcloud_nexus_endpoint" "nexus_endpoint" {
  name = "terraform-nexus-endpoint"

  description = <<-EOT
    Service Name: my-hello-service
    Operation Names: echo say-hello

    Input / Output arguments are in the following repository:
    https://github.com/temporalio/samples-go/blob/main/nexus/service/api.go
  EOT

  worker_target = {
    namespace_id = temporalcloud_namespace.target_namespace.id
    task_queue   = "terraform-task-queue"
  }

  allowed_caller_namespaces = [
    temporalcloud_namespace.caller_namespace.id,
    temporalcloud_namespace.caller_namespace_2.id,
  ]
}
```

In this example, three Namespaces are created:

- The target Namespace for a Nexus Endpoint - Nexus requests are routed to a Worker that polls the target Namespace.
- The caller Namespace(s) - Nexus Operations are invoked from a caller Namespace, for example from a caller Workflow.

These Namespaces are referenced in the [Nexus Endpoint](/nexus/endpoints) configuration:

- `worker_target` (Namespace and Task Queue) - currently only a single `worker_target` is supported.
- `allowed_caller_namespaces` - used to enforce Nexus Endpoint [runtime access controls](/nexus/security#runtime-access-controls).

1. Initialize the Terraform provider.

Run the following command to initialize the Terraform provider.

```bash
terraform init
```

1. Apply the Terraform configuration.

Once initialization occurs, apply the Terraform configuration to your Temporal Cloud account.

```bash
terraform apply
```

Follow the onscreen prompts. Upon completion, you'll see a success message indicating the three Namespaces and the Nexus Endpoint are created.

```bash
temporalcloud_nexus_endpoint.nexus_endpoint: Creation complete after 2s [id=b158063be978471fa1d200569b03834d]
```

You can find more examples of Nexus Endpoint management in the Terraform Provider docs located on HashiCorp's [Terraform Registry](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/nexus_endpoint).
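Because the Nexus Endpoint ID (rather than its name) is what `terraform import` and other tooling use, it can be convenient to expose it as an output. A minimal sketch, assuming the example resource above:

```hcl
# Expose the Nexus Endpoint ID assigned by Temporal Cloud.
output "nexus_endpoint_id" {
  value = temporalcloud_nexus_endpoint.nexus_endpoint.id
}
```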
The Terraform Provider docs also show how to generate CA certs within Terraform configuration files and create a Namespace with API Key-based authentication.

**How do I validate the creation of the Nexus Endpoint?**

You can validate the creation of the Nexus Endpoint through the Temporal Web UI or through the `tcld nexus endpoint get` command.

**Using the Temporal Web UI**

1. Log into the Temporal Cloud Web UI.
1. Navigate to [the Nexus page](https://cloud.temporal.io/nexus).
1. Search for the Nexus Endpoint you created, using only the Nexus Endpoint Name (without an account suffix).

**Using the tcld CLI utility**

Validate the creation of your Nexus Endpoint with the `tcld nexus endpoint get` command. Run the following command using your Nexus Endpoint Name. Do not use the account ID suffix with this endpoint name:

```bash
tcld nexus endpoint get -n "<endpoint-name>"
```

**How do I update a Nexus Endpoint?**

Terraform automatically recognizes changes made within `.tf` files and applies those changes to Temporal. For example, to change the allowed caller Namespaces on a Nexus Endpoint:

1. Add or remove allowed caller Namespaces by updating the Nexus Endpoint configuration, for example by removing `caller_namespace_2` from the configuration above:

```hcl
resource "temporalcloud_nexus_endpoint" "nexus_endpoint" {
  name = "terraform-nexus-endpoint"

  description = <<-EOT
    Service Name: my-hello-service
    Operation Names: echo say-hello

    Input / Output arguments are in the following repository:
    https://github.com/temporalio/samples-go/blob/main/nexus/service/api.go
  EOT

  worker_target = {
    namespace_id = temporalcloud_namespace.target_namespace.id
    task_queue   = "terraform-task-queue"
  }

  allowed_caller_namespaces = [
    temporalcloud_namespace.caller_namespace.id
  ]
}
```

1. Apply your configuration. When prompted, answer yes to continue:

```command
terraform apply
```

Upon completion, you will see a success message indicating your Nexus Endpoint has been updated. It may take several seconds for an update to reach the control plane, where it is visible from the Temporal UI or the tcld CLI. Propagation of Nexus Endpoint changes to the data plane may take longer, but usually completes in less than one minute.

```text
temporalcloud_nexus_endpoint.nexus_endpoint: Modifications complete after 1s [id=b158063be978471fa1d200569b03834d]
```

**How do I delete a Nexus Endpoint?**

To delete a Nexus Endpoint, remove the `temporalcloud_nexus_endpoint` resource configuration from your Terraform files and run the `terraform apply` command. Upon completion, you will see a success message indicating the resource has been destroyed:

```text
temporalcloud_nexus_endpoint.my_nexus_endpoint: Destruction complete after 3s

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.
```

**How do I import a Temporal Cloud Nexus Endpoint?**

If you have an existing Nexus Endpoint in Temporal Cloud, you can import it into Terraform to manage the Nexus Endpoint from Terraform using the `terraform import` command.

1. Initialize the Terraform provider in a new directory.

Run the following command to initialize the Terraform provider.

```bash
terraform init
```

1. Provide a configuration placeholder in your Terraform configuration and ensure you've included your [API key](#setup).

```hcl
terraform {
  required_providers {
    temporalcloud = {
      source = "temporalio/temporalcloud"
    }
  }
}

provider "temporalcloud" {

}

resource "temporalcloud_nexus_endpoint" "nexus_endpoint" {

}
```
1. Run the `terraform import` command from the command line and pass in the Nexus Endpoint ID. Your Nexus Endpoint ID is available at the top of the Nexus Endpoint's page in the [Temporal Cloud UI](https://cloud.temporal.io/nexus).

```bash
terraform import temporalcloud_nexus_endpoint.nexus_endpoint <nexus-endpoint-id>
```

Upon completion, you will see a success message indicating the Nexus Endpoint was imported.

```text
temporalcloud_nexus_endpoint.nexus_endpoint: Refreshing state... [id=3c0c75ccfa8144b092c13ce632463761]

Import successful!
```

The Nexus Endpoint is now a part of the Terraform state, and all changes to the Nexus Endpoint should be managed by Terraform.

:::caution

Once a resource has been imported into Terraform, outside changes to the resource will create Terraform "drift" errors on subsequent Terraform operations.

:::

## Manage Temporal Cloud Users with Terraform

Manage Temporal Cloud Users with the same process you use to manage Namespaces with Terraform. The following examples create, update, delete, and import Temporal Cloud Users with `terraform apply` commands on the Terraform configuration file.

:::note User Management

Cautions about Temporal User management:

- Terraform can't manage the Temporal Account Owner role. While you can import an Account Owner to Terraform, you cannot create, update, or delete an Account Owner with Terraform.
- Right now, you can't manage a user's access to a Namespace from the Namespace resource. You must manage Namespace access from the User resource. This is also true for Service Accounts.
- Account Owners and Global Admins automatically gain access to all Namespaces in Temporal. Therefore, you cannot specify Namespace access for these roles. This is also true for Service Accounts.
- Follow Terraform best practices for resource management. Manage a specific user in one and only one .tf file. If you don't, there's a risk that you may overwrite the user's permissions.
- To import a user, you'll need the User's ID, which is currently not available in the Temporal Cloud UI. You can fetch a User ID by running the `tcld user list` command.

:::

For more detailed examples of how to manage Users via Terraform, check the Terraform Registry documentation for [provisioning a Temporal Cloud user](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/user).

**How do I create a Temporal Cloud User with Terraform?**

1. Add a Terraform User resource configuration to your Terraform file.

```hcl
terraform {
  required_providers {
    temporalcloud = {
      source = "temporalio/temporalcloud"
    }
  }
}

provider "temporalcloud" {

}

resource "temporalcloud_namespace" "namespace" {
  name               = "terraform"
  regions            = ["aws-us-east-1"]
  accepted_client_ca = base64encode(file("ca.pem"))
  retention_days     = 14
}

# Global admins automatically have access to all namespaces.
resource "temporalcloud_user" "global_admin" {
  email          = "admin@example.com"
  account_access = "Admin"
}

# Developers can be granted explicit namespace permissions.
resource "temporalcloud_user" "namespace_admin" {
  email          = "developer@example.com"
  account_access = "Developer"

  namespace_accesses = [{
    namespace_id = temporalcloud_namespace.namespace.id
    permission   = "Write"
  }]
}
```

Replace the email and domain values with your Temporal Cloud User email and domain.

1. Apply your configuration. When prompted, answer yes to continue:

```command
terraform apply
```

Upon completion, you will see a success message indicating your User has been created.
```text
temporalcloud_user.namespace_admin: Creation complete after 1s [id=12a34bc5678910d38d9e8390636e7412]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
```

**How do I update a Temporal Cloud User with Terraform?**

To update a User with Terraform, follow the same steps used to create a User.

**How do I delete a Temporal Cloud User with Terraform?**

To delete a User with Terraform, remove the Terraform User resource configuration from your Terraform file and run the `terraform apply` command.

1. Remove the Terraform User resource configuration from your Terraform file.

```hcl
terraform {
  required_providers {
    temporalcloud = {
      source  = "temporalio/temporalcloud"
      version = ">= 0.0.6"
    }
  }
}

provider "temporalcloud" {

}

resource "temporalcloud_namespace" "namespace" {
  name               = "terraform"
  regions            = ["aws-us-east-1"]
  accepted_client_ca = base64encode(file("ca.pem"))
  retention_days     = 14
}

# This user remains after running `terraform apply`.
resource "temporalcloud_user" "global_admin" {
  email          = "admin@example.com"
  account_access = "Admin"
}

# The following user resource has been removed (or commented out),
# so Terraform will delete it.
# resource "temporalcloud_user" "namespace_admin" {
#   email          = "developer@example.com"
#   account_access = "Developer"
#
#   namespace_accesses = [{
#     namespace_id = temporalcloud_namespace.namespace.id
#     permission   = "Write"
#   }]
# }
```

1. Run the `terraform apply` command. When prompted, answer yes to continue:

```command
terraform apply
```

Upon completion, you will see a success message indicating your User has been deleted.

```text
temporalcloud_user.namespace_admin: Destruction complete after 2s

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.
```

**How do I import a Temporal User?**

If you have an existing User in Temporal Cloud, you can import it into Terraform using the `terraform import` command.

1. Provide a configuration placeholder in your Terraform configuration.

```hcl
resource "temporalcloud_user" "user" {

}
```

1. Run the `terraform import` command and pass in the User ID. Your User ID is available using the Temporal Cloud CLI `tcld u l` command.

```bash
terraform import temporalcloud_user.user 72360058153949edb2f1d47019c1e85f
```

The User is now a part of the Terraform state, and all changes to the User should be managed by Terraform.

## Manage Temporal Cloud Service Accounts with Terraform

The process and steps for managing a Service Account with Terraform are very similar to those for managing a User, with a few small differences:

- Service Accounts use the Service Account Terraform resource, not the User resource.
- Service Accounts do not have email addresses; they have names instead. This means you should specify a name for a Service Account instead of an email.

Everything else about managing Service Accounts with Terraform follows the same process, guidance, and limitations as managing Users with Terraform.
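For reference, a minimal Service Account resource looks like the following sketch; the same shape appears in the API Key example in the next section:

```hcl
# A Service Account is identified by a name rather than an email address.
resource "temporalcloud_service_account" "global_service_account" {
  name           = "admin"
  account_access = "Admin"
}
```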
## Manage Temporal Cloud API Keys with Terraform

You can manage your own, personal API Keys and Service Account API Keys with Terraform. The process and steps for managing an API Key with Terraform are very similar to those for other Temporal Cloud resources. You can create, update, and delete API Keys with Terraform (API Keys cannot be imported; see below).

One difference between working with API Keys as a Terraform resource compared to other Temporal Cloud resources is the need to access an API Key's secure token output from Terraform. Walk through the process of securely accessing the API Key Token in the Create section of this guide.

:::note Limits and Best Practices

- See the API Key [documentation](https://docs.temporal.io/cloud/api-keys) for information about the limits and best practices for managing API Keys.
- See Terraform's documentation on working with [sensitive data](https://www.terraform.io/docs/language/values/variables.html#sensitive-values) for more information on how to manage sensitive data in Terraform.

:::

**How do I create a Temporal Cloud API Key with Terraform?**

From the example in the [Terraform Registry](https://registry.terraform.io/providers/temporalio/temporalcloud/latest/docs/resources/apikey):

1. Add a Terraform API Key resource configuration to your Terraform file.

```hcl
terraform {
  required_providers {
    temporalcloud = {
      source = "temporalio/temporalcloud"
    }
  }
}

provider "temporalcloud" {

}

resource "temporalcloud_service_account" "global_service_account" {
  name           = "admin"
  account_access = "Admin"
}

resource "temporalcloud_apikey" "global_apikey" {
  display_name = "admin"
  owner_type   = "service-account"
  owner_id     = temporalcloud_service_account.global_service_account.id
  expiry_time  = "2024-11-01T00:00:00Z"
  disabled     = false
}
```

Make sure to:

- Replace the display_name, expiry_time, and disabled values with your Temporal Cloud API Key configuration.
- Replace the owner_type and owner_id values with your Temporal Cloud Service Account or other Identity information.

1. Create an output.tf file and add the following code to output the API Key Token.

```hcl
output "apikey_token" {
  value     = temporalcloud_apikey.global_apikey.token
  sensitive = true
}
```

1. Apply your configuration. When prompted, answer yes to continue:

```command
terraform apply
```

Upon completion, you will see a success message indicating the API Key has been created.

```text
temporalcloud_apikey.global_apikey: Creation complete after 1s [id=kayBf38JIWkMPmnfr59iEIaEk2L7uqR4]
```

1. Access the API Key Token securely.

You'll notice that if you view the state for the API Key resource, the token value is not displayed.

```bash
terraform state show temporalcloud_apikey.global_apikey
# temporalcloud_apikey.global_apikey:
resource "temporalcloud_apikey" "global_apikey" {
    disabled     = false
    display_name = "adminKey3"
    expiry_time  = "2024-12-01T00:00:00Z"
    id           = "kayBf38JIWkMPmnfr59iEIaEk2L7uqR4"
    owner_id     = "b81336a6097449cba75c2e5500df3d31"
    owner_type   = "service-account"
    state        = "active"
    token        = (sensitive value)
}
```

To access the token, you can use the Terraform output command.

```bash
terraform output -json apikey_token
```

This displays the token value in the terminal.

:::info Security and API Keys

Remember, keep your Terraform state files secure if you're managing API Keys with Terraform. The state file contains sensitive information, like the API Key Token, that should not be shared or exposed.

:::

**How do I update a Temporal Cloud API Key with Terraform?**

To update an API Key with Terraform, follow the same steps used to create an API Key.

:::note Editing Fields

You can only edit an API Key's name or description field. Updating an API Key does not generate a new secure token.

:::

**How do I delete a Temporal Cloud API Key with Terraform?**

To delete an API Key with Terraform, remove the Terraform API Key resource configurations from your Terraform and output.tf files and run the `terraform apply` command.

**How do I import a Temporal API Key?**

You cannot import an API Key into Terraform. Once created, the API Key secret isn't stored and can't be retrieved, so you can't access it using import. Instead, Temporal recommends creating a new API Key using Terraform directly.

## Data Sources - Regions and Namespaces

The Terraform provider also supports two data sources that give you access to the available Regions and Namespaces in your Temporal Cloud account.

:::note Terraform Data Sources

See the Terraform [documentation](https://developer.hashicorp.com/terraform/language/data-sources) to learn more about Terraform Data Sources.

:::

For example, to retrieve a list of regions available for your account, you can use the regions data source:

```hcl
data "temporalcloud_regions" "regions" {}

output "regions" {
  value = data.temporalcloud_regions.regions.regions
}
```
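The second data source covers Namespaces. The exact name and attributes should be confirmed in the provider docs; as a sketch, assuming it follows the same naming pattern as the regions example:

```hcl
# Data source and attribute names assumed by analogy with the regions
# example; confirm against the provider documentation before use.
data "temporalcloud_namespaces" "all" {}

output "namespaces" {
  value = data.temporalcloud_namespaces.all.namespaces
}
```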
## Community Involvement

Do you have feedback about the provider? Want to report a bug or request a feature? We'd love to hear from you.

- Please reach out to us in the Temporal Community [Slack](https://join.slack.com/t/temporalio/shared_invite/zt-2u2ey8ilu-LRxnd3PSoAk9GZ94UuzoBA) in the #terraform channel
- Feel free to create issues and contribute PRs in the Temporal Terraform [GitHub repository](https://github.com/temporalio/terraform-provider-temporalcloud/tree/main)

---

## Monitor worker health

This page is a guide to monitoring a Temporal Worker fleet and covers the following scenarios:

- [Configuring minimal observations](#minimal-observations)
- [How to detect a backlog of Tasks](#detect-task-backlog)
- [How to detect greedy Worker resources](#detect-greedy-workers)
- [How to detect misconfigured Workers](#detect-misconfigured-workers)
- [How to configure Sticky cache](#configure-sticky-cache)

## Minimal Observations {#minimal-observations}

These alerts should be configured and understood first to gain insight into your application's health and behaviors.

1. Create monitors and alerts for the Schedule To Start latency SDK metrics (both [Workflow Executions](/references/sdk-metrics#workflow_task_schedule_to_start_latency) and [Activity Executions](/references/sdk-metrics#activity_schedule_to_start_latency)). See the [Detect Task backlog section](#detect-task-backlog) to explore [sample queries](#prometheus-query-samples) and appropriate responses that accompany these values. A sample Prometheus alerting rule for this latency appears at the end of this section.
   - Alert at >200ms for your p99 value
   - Plot >100ms for your p95 value
2. Create a [Grafana](/cloud/metrics/prometheus-grafana) panel called Sync Match Rate. See the [Sync Match Rate section](#sync-match-rate) to explore example queries and appropriate responses that accompany these values.
   - Alert at \<95% for your p99 value
   - Plot \<99% for your p95 value
3. Create a [Grafana](/cloud/metrics/prometheus-grafana) panel called Poll Success Rate. See the [Detect greedy Workers section](#detect-greedy-workers) for example queries and appropriate responses that accompany these values.
   - Alert at \<90% for your p99 value
   - Plot \<95% for your p95 value

The following alerts build on the above to dive deeper into specific potential causes for Worker-related issues you might be experiencing.

1. Create monitors and alerts for the [temporal_worker_task_slots_available](/references/sdk-metrics#worker_task_slots_available) SDK metric. See the [Detect misconfigured Workers section](#detect-misconfigured-workers) for appropriate responses based on the value.
   - Alert at 0 for your p99 value
2. Create monitors for the [temporal_sticky_cache_size](/references/sdk-metrics#sticky_cache_size) SDK metric. See the [Configure Sticky Cache section](#configure-sticky-cache) for more details on this configuration.
   - Plot at \{value\} > \{WorkflowCacheSize.Value\}
3. Create monitors for the [temporal_sticky_cache_total_forced_eviction](/references/sdk-metrics#sticky_cache_total_forced_eviction) SDK metric. This metric is available in the Go SDK and the Java SDK only. See the [Configure Sticky Cache section](#configure-sticky-cache) for more details and appropriate responses.
   - Alert at >\{predetermined_high_number\}
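As referenced in the first list above, the alert thresholds can be wired up directly in Prometheus. This is a minimal sketch of an alerting rule for the p99 Workflow Task Schedule To Start latency, reusing the query from the Prometheus query samples later on this page (the group and alert names are placeholders):

```yaml
groups:
  - name: temporal-worker-health
    rules:
      - alert: WorkflowTaskScheduleToStartP99High
        # Fire when p99 Workflow Task Schedule To Start latency stays above 200ms.
        expr: |
          histogram_quantile(0.99, sum(rate(temporal_workflow_task_schedule_to_start_latency_seconds_bucket[5m])) by (le, namespace, task_queue)) > 0.2
        for: 5m
        labels:
          severity: warning
```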
## Detect Task Backlog {#detect-task-backlog}

### Symptoms of high Task backlog

If the Task backlog is too high, Tasks wait longer to find Workers to run on. This can delay Workflow execution. You can detect a growing Task backlog by watching the Schedule To Start latency and the Sync Match Rate.

Metrics to monitor:

- **SDK metric**: [workflow_task_schedule_to_start_latency](/references/sdk-metrics#workflow_task_schedule_to_start_latency)
- **SDK metric**: [activity_schedule_to_start_latency](/references/sdk-metrics#activity_schedule_to_start_latency)
- **Temporal Cloud metric**: [temporal_cloud_v0_poll_success_count](/cloud/metrics/reference#temporal_cloud_v0_poll_success_count)
- **Temporal Cloud metric**: [temporal_cloud_v0_poll_success_sync_count](/cloud/metrics/reference#temporal_cloud_v0_poll_success_sync_count)

### Schedule To Start latency

The Schedule To Start metric represents how long Tasks stay unprocessed in the Task Queues. It is the time between when a Task is enqueued and when it is started by a Worker. If this time is long, it likely means that your Workers can't keep up: either increase the number of Workers (if the host load is already high) or increase the number of pollers per Worker.

If your Schedule To Start latency alert triggers or is high, check the [Sync Match Rate](#sync-match-rate) to decide whether you need to adjust your Worker fleet. If your Sync Match Rate is low, contact [Temporal Cloud support](/cloud/support#support-ticket).

The schedule_to_start_latency SDK metric for both [Workflow Executions](/references/sdk-metrics#workflow_task_schedule_to_start_latency) and [Activity Executions](/references/sdk-metrics#activity_schedule_to_start_latency) should have alerts.

#### Prometheus query samples

**Workflow Task Latency, 99th percentile**

```
histogram_quantile(0.99, sum(rate(temporal_workflow_task_schedule_to_start_latency_seconds_bucket[5m])) by (le, namespace, task_queue))
```

**Workflow Task Latency, average**

```
sum(increase(temporal_workflow_task_schedule_to_start_latency_seconds_sum[5m])) by (namespace, task_queue)
/
sum(increase(temporal_workflow_task_schedule_to_start_latency_seconds_count[5m])) by (namespace, task_queue)
```

**Activity Task Latency, 99th percentile**

```
histogram_quantile(0.99, sum(rate(temporal_activity_schedule_to_start_latency_seconds_bucket[5m])) by (le, namespace, task_queue))
```

**Activity Task Latency, average**

```
sum(increase(temporal_activity_schedule_to_start_latency_seconds_sum[5m])) by (namespace, task_queue)
/
sum(increase(temporal_activity_schedule_to_start_latency_seconds_count[5m])) by (namespace, task_queue)
```

**Target**

This latency should be very low, close to zero. Any higher value indicates a bottleneck.

### Sync Match Rate {#sync-match-rate}

The Sync Match Rate compares the rate of Tasks that are delivered to Workers without having to be persisted (Workers are up and available to pick them up) against the rate of all delivered Tasks. A sync match is when a Task is immediately matched to a Worker via the Sticky Queue.
An async match is when a Task cannot be matched to the Sticky Queue for a Worker. This can happen when no Worker has cached the Workflow, or if the Task times out during processing. In this case, the Task returns to the general Task Queue.

**Calculate Sync Match Rate**

```
temporal_cloud_v0_poll_success_sync_count ÷ temporal_cloud_v0_poll_success_count = N
```

#### Prometheus query samples

**sync_match_rate query**

```
sum by(temporal_namespace) (
  rate(
    temporal_cloud_v0_poll_success_sync_count{temporal_namespace=~"$namespace"}[5m]
  )
)
/
sum by(temporal_namespace) (
  rate(
    temporal_cloud_v0_poll_success_count{temporal_namespace=~"$namespace"}[5m]
  )
)
```

**Target**

The Sync Match Rate should be at least >95%, but preferably >99%.

### Handling Task backlog issues {#task-backlog-handling}

Once you have detected a high Task backlog, consider the scenarios below to take action.

#### High Schedule To Start latency and high sync match rate

There are three typical causes for this:

- There are not enough Workers to perform work
- Each Worker is either under-resourced or misconfigured to handle enough work
- There is congestion caused by the environment (e.g., network) between the Worker(s) and Temporal Cloud

Consider:

- Increasing the number of available Workers
- Verifying that your Worker hosts are appropriately resourced
- Increasing the Worker configuration value for concurrent pollers for Workers/Task executions (if your Worker resources can accommodate the increased load)
- Doing some combination of these

#### High Schedule To Start latency and low sync match rate

Verify that you have not set a value for `ScheduleToStartTimeout` in your Activity Options. This may skew your observations.

It may be acceptable for your use case to have a low sync match rate, for example if you have known workloads or you intentionally throttle Tasks. In this case, it's also important to understand what the fill and drain rates of async Tasks are during these windows:

Successful async polls

```
temporal_cloud_v0_poll_success_count - temporal_cloud_v0_poll_success_sync_count = N
```

```
sum(rate(temporal_cloud_v0_poll_success_count{temporal_namespace=~"$temporal_namespace"}[5m])) by (temporal_namespace, task_type)
-
sum(rate(temporal_cloud_v0_poll_success_sync_count{temporal_namespace=~"$temporal_namespace"}[5m])) by (temporal_namespace, task_type)
```

**Actions**

- Verify that your Worker setup is optimized for your instance:
  - Check the system CPU usage against `task_slots` and adjust the `maxConcurrentWorkflowTaskExecutionSize` and `maxConcurrentActivityExecutionSize` settings as necessary.
  - Check the system memory usage against `sticky_cache_size` and adjust the sticky cache size as necessary.
  - For a detailed explanation of settings, see the [Worker Performance](/develop/worker-performance#task-queues-processing-tuning) section.
- Increase the Worker config for concurrent pollers for Workflow or Activity `task_slots`, if your Worker resources can accommodate the increased load.
  - Reference [Worker Performance > Poller Count](/develop/worker-performance#poller-count).
- Increase the number of available Workers.

:::warning

Setting the [Schedule To Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout) in your Activity Options can skew your observations. Avoid setting a Schedule To Start Timeout when load testing for latency.

:::
## Detect greedy Worker resources {#detect-greedy-workers}

**How to detect greedy Worker resources.**

You can have too many Workers. If you see the Poll Success Rate showing low numbers, you might have too many resources polling Temporal Cloud.

Metrics to monitor:

- **Temporal Cloud metric**: [temporal_cloud_v0_poll_success_count](/cloud/metrics/reference#temporal_cloud_v0_poll_success_count)
- **Temporal Cloud metric**: [temporal_cloud_v0_poll_success_sync_count](/cloud/metrics/reference#temporal_cloud_v0_poll_success_sync_count)
- **Temporal Cloud metric**: [temporal_cloud_v0_poll_timeout_count](/cloud/metrics/reference#temporal_cloud_v0_poll_timeout_count)
- **SDK metric**: [temporal_workflow_task_schedule_to_start_latency](/references/sdk-metrics#workflow_task_schedule_to_start_latency)
- **SDK metric**: [temporal_activity_schedule_to_start_latency](/references/sdk-metrics#activity_schedule_to_start_latency)

**Calculate Poll Success Rate**

```
(temporal_cloud_v0_poll_success_count + temporal_cloud_v0_poll_success_sync_count)
/
(temporal_cloud_v0_poll_success_count + temporal_cloud_v0_poll_success_sync_count + temporal_cloud_v0_poll_timeout_count)
```

**Target**

The Poll Success Rate should be >90% in most cases of systems with a steady load. For high-volume, low-latency systems, try to target >95%.

**Interpretation**

There may be too many pollers for the amount of available work. If you see all of the following at the same time, then you might have too many Workers:

- Low poll success rate
- Low Schedule To Start latency
- Low Worker host resource utilization

**Actions**

Consider sizing down your Workers by either:

- Reducing the number of Workers polling the impacted Task Queue, OR
- Reducing the concurrent pollers per Worker, OR
- Both of the above

#### Prometheus query samples

**poll_success_rate query**

```
(
  (
    sum by(temporal_namespace) (
      rate(
        temporal_cloud_v0_poll_success_count{temporal_namespace=~"$namespace"}[5m]
      )
    )
    +
    sum by(temporal_namespace) (
      rate(
        temporal_cloud_v0_poll_success_sync_count{temporal_namespace=~"$namespace"}[5m]
      )
    )
  )
  /
  (
    (
      sum by(temporal_namespace) (
        rate(
          temporal_cloud_v0_poll_success_count{temporal_namespace=~"$namespace"}[5m]
        )
      )
      +
      sum by(temporal_namespace) (
        rate(
          temporal_cloud_v0_poll_success_sync_count{temporal_namespace=~"$namespace"}[5m]
        )
      )
    )
    +
    sum by(temporal_namespace) (
      rate(
        temporal_cloud_v0_poll_timeout_count{temporal_namespace=~"$namespace"}[5m]
      )
    )
  )
)
```

## Detect misconfigured Workers {#detect-misconfigured-workers}

**How to detect misconfigured Workers.**

Worker configuration can negatively affect Task processing efficiency.

Metrics to monitor:

- **SDK metric**: [temporal_worker_task_slots_available](/references/sdk-metrics#worker_task_slots_available)
- **SDK metric**: [sticky_cache_size](/references/sdk-metrics#sticky_cache_size)
- **SDK metric**: [sticky_cache_total_forced_eviction](/references/sdk-metrics#sticky_cache_total_forced_eviction)

**Execution Size Configuration**

The `maxConcurrentWorkflowTaskExecutionSize` and `maxConcurrentActivityExecutionSize` settings define the total number of available slots for the Worker. If these are set too low, the Worker will not be able to keep up with Task processing.

**Target**

The `temporal_worker_task_slots_available` metric should always be >0.

#### Prometheus query samples

**Over Time**

```
avg_over_time(temporal_worker_task_slots_available{namespace="$namespace",worker_type="WorkflowWorker"}[10m])
```

**Current Time**

```
temporal_worker_task_slots_available{namespace="default", worker_type="WorkflowWorker", task_queue="$task_queue_name"}
```

**Interpretation**

You are likely experiencing a Task backlog if you frequently see inadequate slot counts. The work is not getting processed as fast as it could be.

**Action**

Increase the `maxConcurrentWorkflowTaskExecutionSize` and `maxConcurrentActivityExecutionSize` values, and keep an eye on your Worker resource metrics (CPU utilization, etc.) to make sure you haven't created a new issue.
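The option names above follow the Java SDK; equivalents exist in each SDK. As a minimal sketch in the .NET SDK (matching the C# examples later in this document), where the corresponding knobs are, to the best of our knowledge, `MaxConcurrentWorkflowTasks`, `MaxConcurrentActivities`, and `MaxCachedWorkflows` on `TemporalWorkerOptions`:

```csharp
using Temporalio.Client;
using Temporalio.Worker;

var client = await TemporalClient.ConnectAsync(new("my-temporal-endpoint:7233"));

// Raise the slot counts if temporal_worker_task_slots_available keeps hitting
// zero, and watch host CPU/memory after each change. Values are illustrative.
var options = new TemporalWorkerOptions("my-task-queue")
{
    MaxConcurrentWorkflowTasks = 100, // Workflow Task slots
    MaxConcurrentActivities = 200,    // Activity slots
    MaxCachedWorkflows = 1000,        // sticky Workflow cache size
};
// Register your Workflows/Activities on `options` as usual, then:
// var worker = new TemporalWorker(client, options);
```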
### Configure Sticky Execution Cache {#configure-sticky-cache}

Sticky Execution means that a Worker caches a Workflow Execution's Event History and creates a dedicated Task Queue to listen on. It significantly improves performance because the Temporal Service only sends new events to the Worker instead of entire Event Histories.

**Target**

The `sticky_cache_size` metric should report a value less than or equal to your `WorkflowCacheSize` value. Also, `sticky_cache_total_forced_eviction` should not report relatively high numbers.

**Action**

If you see a high eviction count, verify there are no other inefficiencies in your Worker configuration or resource provisioning (backlog). If you see the cache size metric exceed `WorkflowCacheSize`, increase this value if your Worker resources can accommodate it, or provision more Workers. Finally, take time to review [the Worker performance guide](/develop/worker-performance) and see if it addresses other potential cache issues.

#### Prometheus query samples

**Sticky Cache Size**

```
max_over_time(temporal_sticky_cache_size{namespace="$namespace"}[10m])
```

**Sticky Cache Evictions**

```
rate(temporal_sticky_cache_total_forced_eviction_total{namespace="$namespace"}[5m])
```

## Manage Worker Heartbeating {#manage-worker-heartbeating}

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

This feature is currently in [Public Preview](/evaluate/development-production-features/release-stages#public-preview).

:::

Workers send a heartbeat to the Temporal Server every 60 seconds by default. This heartbeat provides liveness and configuration data from the Worker to the Server. The specific data sent can be found in the [API](https://github.com/temporalio/api/blob/master/temporal/api/worker/v1/message.proto). With a consistent heartbeat from Workers, the Server can obtain an accurate count of Workers, understand Worker performance, and respond to Worker heartbeats with commands. Some examples of how this is useful:

- understanding the difference between a Worker that is down and a Worker that is processing tasks for a long time
- identifying a Worker with high CPU usage from the Server point of view

Use the Temporal CLI to view information about all Workers connected to the Temporal Server. Use `temporal worker describe` to see details of a specific Worker. Use `temporal worker list` to get a complete list of all connected Workers.

If you wish to disable Worker heartbeating (the features above will not work with heartbeating disabled) or set heartbeating to be more frequent than every 60 seconds (the allowed range is 1s to 60s), set the configuration relevant to your SDK.

In the Python SDK, use `TelemetryConfig()` to adjust heartbeat settings. See the [Python SDK documentation](https://python.temporal.io/temporalio.bridge.runtime.RuntimeOptions.html#worker_heartbeat_interval_millis) for more details.
In the Ruby SDK, add configurations to `Runtime()` to adjust heartbeat settings. See the [Ruby SDK documentation](https://ruby.temporal.io/Temporalio/Runtime.html) for more details.

---

## How to visualize an Activity Retry Policy with timeouts

Use this tool to visualize total Activity Execution times and experiment with different Activity timeouts and Retry Policies. For a list of Activity Task Execution times, use [this calculator](https://temporal-time.netlify.app/?initialInterval=1&maxInterval=100&maxReties=10&backoffCoeificent=2).

The simulator is based on a common Activity use case, which is to call a third-party HTTP API and return the results. See the example code snippets below.

Use the Activity Retries settings to configure how long the API request takes to succeed or fail. There is an option to generate scenarios. The _Task Time in Queue_ simulates the time the Activity Task might be waiting in the Task Queue.

Use the Activity Timeouts and Retry Policy settings to see how they impact the success or failure of an Activity Execution.

---

## Asynchronous Activity completion - .NET SDK

This page describes how to asynchronously complete an Activity.

[Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) enables the Activity Function to return without the Activity Execution completing.

There are three steps to follow:

1. The Activity provides the external system with identifying information needed to complete the Activity Execution. Identifying information can be a [Task Token](/activity-execution#task-token), or a combination of Namespace, Workflow Id, and Activity Id.
2. The Activity Function completes in a way that identifies it as waiting to be completed by an external system.
3. The Temporal Client is used to Heartbeat and complete the Activity.

To mark an Activity as completing asynchronously, do the following inside the Activity.

```csharp
// Capture token for later completion
capturedToken = ActivityExecutionContext.Current.Info.TaskToken;

// Throw special exception that says an activity will be completed somewhere else
throw new CompleteAsyncException();
```

To update an Activity outside the Activity, use the [GetAsyncActivityHandle()](https://dotnet.temporal.io/api/Temporalio.Client.ITemporalClient.html#Temporalio_Client_ITemporalClient_GetAsyncActivityHandle_System_Byte___) method to get the handle of the Activity.

```csharp
var handle = myClient.GetAsyncActivityHandle(capturedToken);
```

Then, on that handle, call the `HeartbeatAsync`, `CompleteAsync`, `FailAsync`, or `ReportCancellationAsync` method to update the Activity.

```csharp
await handle.CompleteAsync("Completion value.");
```
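If the external system has the Namespace-scoped identifiers instead of a Task Token, the handle can also be obtained by IDs. A short sketch with placeholder IDs, assuming the ID-based overload of `GetAsyncActivityHandle` (confirm the exact signature in the API reference linked above):

```csharp
// Get the handle using Workflow Id, Run Id (may be null), and Activity Id.
var handleById = myClient.GetAsyncActivityHandle(
    "my-workflow-id", // Workflow Id
    null,             // Run Id (null targets the latest run)
    "my-activity-id"  // Activity Id
);
await handleById.CompleteAsync("Completion value.");
```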
---

## Benign exceptions - .NET SDK

**How to mark an Activity error as benign using the Temporal .NET SDK**

When Activities throw errors that are expected or not severe, they can create noise in your logs, metrics, and OpenTelemetry traces, making it harder to identify real issues. By marking these errors as benign, you can exclude them from your observability data while still handling them in your Workflow logic.

To mark an error as benign, set the `category` parameter to `ApplicationErrorCategory.Benign` when throwing an [`ApplicationFailureException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.ApplicationFailureException.html).

Benign errors:

- Have Activity failure logs downgraded to DEBUG level
- Do not emit Activity failure metrics
- Do not set the OpenTelemetry failure status to ERROR

```csharp
using Temporalio.Activities;
using Temporalio.Api.Enums.V1;
using Temporalio.Exceptions;

public class MyActivities
{
    [Activity]
    public async Task<string> MyActivityAsync()
    {
        try
        {
            return await CallExternalServiceAsync();
        }
        catch (Exception e)
        {
            // Mark this error as benign since it's expected
            throw new ApplicationFailureException(
                "Service is down",
                inner: e,
                category: ApplicationErrorCategory.Benign);
        }
    }
}
```

Use benign exceptions for Activity errors that occur regularly as part of normal operations, such as polling an external service that isn't ready yet, or handling expected transient failures that will be retried.

---

## Interrupt a Workflow - .NET SDK

This page shows how to interrupt a Workflow Execution.

You can interrupt a Workflow Execution in one of the following ways:

- [Cancel](#cancellation): Canceling a Workflow provides a graceful way to stop Workflow Execution.
- [Terminate](#termination): Terminating a Workflow forcefully stops Workflow Execution.

Terminating a Workflow resembles killing a process:

- The system records a `WorkflowExecutionTerminated` event in the Workflow History.
- The termination forcefully and immediately stops the Workflow Execution.
- The Workflow code gets no chance to handle termination.
- A Workflow Task doesn't get scheduled.

In most cases, canceling is preferable because it allows the Workflow to finish gracefully. Terminate only if the Workflow is stuck and cannot be canceled normally.

## Cancellation {#cancellation}

To give a Workflow and its Activities the ability to be cancelled, do the following:

- Handle a Cancellation request within a Workflow.
- Set Activity Heartbeat Timeouts.
- Listen for and handle a Cancellation request within an Activity.
- Send a Cancellation request from a Temporal Client.

### Handle Cancellation in Workflow {#handle-cancellation-in-workflow}

**How to handle a Cancellation in a Workflow in .NET.**

Workflow Definitions can be written to respond to cancellation requests. It is common for an Activity to be run on Cancellation to perform cleanup.

Cancellation Requests on Workflows cancel the `Workflow.CancellationToken`. This token is implicitly used for all calls within the Workflow as well (e.g., Timers, Activities), so cancellation propagates to them to be handled and bubble out.

```csharp
[WorkflowRun]
public async Task RunAsync()
{
    try
    {
        // Whether this workflow waits on the activity to handle the cancellation or not is
        // dependent upon the CancellationType option. We leave the default here which sends the
        // cancellation but does not wait on it to be handled.
        await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.MyNormalActivity(),
            new() { ScheduleToCloseTimeout = TimeSpan.FromMinutes(5) });
    }
    catch (Exception e) when (TemporalException.IsCanceledException(e))
    {
        // The "when" clause above is because we only want to apply the logic to cancellation, but
        // this kind of cleanup could be done on any/all exceptions too.
        Workflow.Logger.LogError(e, "Cancellation occurred, performing cleanup");

        // Call cleanup activity. If this throws, it will swallow the original exception which we
        // are ok with here. This could be changed to just log a failure and let the original
        // cancellation continue.
We use a different cancellation token since the default one on // Workflow.CancellationToken is now marked cancelled. using var detachedCancelSource = new CancellationTokenSource(); await Workflow.ExecuteActivityAsync( (MyActivities a) => a.MyCancellationCleanupActivity(), new() { ScheduleToCloseTimeout = TimeSpan.FromMinutes(5), CancellationToken = detachedCancelSource.Token; }); // Rethrow the cancellation throw; } } ``` ### Handle Cancellation in an Activity {#handle-cancellation-in-an-activity} **How to handle a Cancellation in an Activity using the Temporal .NET SDK** Ensure that the Activity is [Heartbeating](/develop/dotnet/failure-detection#activity-heartbeats) to receive the Cancellation request and stop execution. Also make sure that the [Heartbeat Timeout](/develop/dotnet/failure-detection#heartbeat-timeout) is set on the Activity Options when calling from the Workflow. An Activity Cancellation Request cancels the `CancellationToken` on the `ActivityExecutionContext`. ```csharp [Activity] public async Task MyActivityAsync() { // This is a naive loop simulating work, but similar heartbeat/cancellation logic applies to // other scenarios as well while (true) { // Send heartbeat ActivityExecutionContext.Current.Heartbeat(); // Do some work, passing the cancellation token await Task.Delay(1000, ActivityExecutionContext.Current.CancellationToken); } } ``` ### Request Cancellation {#request-cancellation} **How to request Cancellation of a Workflow using the Temporal .NET SDK** Use `CancelAsync` on the `WorkflowHandle` to cancel a Workflow Execution. ```csharp // Get a workflow handle by its workflow ID. This could be made specific to a run by passing run ID. // This could also just be a handle that is returned from StartWorkflowAsync instead. var handle = myClient.GetWorkflowHandle("my-workflow-id"); // Send cancellation. This returns when cancellation is received by the server. Wait on the handle's // result to wait for cancellation to be applied. await handle.CancelAsync(); ``` **How to request Cancellation of an Activity in .NET using the Temporal .NET SDK** By default, Activities are automatically cancelled when the Workflow is cancelled since the workflow cancellation token is used by activities by default. To issue a cancellation explicitly, a new cancellation token can be created. ```csharp [WorkflowRun] public async Task RunAsync() { // Create a source linked to workflow cancellation. A new source could be created instead if we // didn't want it associated with workflow cancellation. using var cancelActivitySource = CancellationTokenSource.CreateLinkedTokenSource( Workflow.CancellationToken); // Start the activity. Whether this workflow waits on the activity to handle the cancellation // or not is dependent upon the CancellationType option. We leave the default here which sends // the cancellation but does not wait on it to be handled. 
    var activityTask = Workflow.ExecuteActivityAsync(
        (MyActivities a) => a.MyNormalActivity(),
        new()
        {
            ScheduleToCloseTimeout = TimeSpan.FromMinutes(5),
            CancellationToken = cancelActivitySource.Token,
        });

    // Wait 5 minutes, then cancel it
    await Workflow.DelayAsync(TimeSpan.FromMinutes(5));
    cancelActivitySource.Cancel();

    // Wait on the activity which will throw cancellation which will fail the workflow
    await activityTask;
}
```

## Termination {#termination}

**How to Terminate a Workflow Execution in .NET using the Temporal .NET SDK**

To Terminate a Workflow Execution in .NET, use the [TerminateAsync()](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_TerminateAsync_System_String_Temporalio_Client_WorkflowTerminateOptions_) method on the Workflow handle.

```csharp
// Get a workflow handle by its workflow ID. This could be made specific to a run by passing run ID.
// This could also just be a handle that is returned from StartWorkflowAsync instead.
var handle = myClient.GetWorkflowHandle("my-workflow-id");

// Terminate
await handle.TerminateAsync();
```

Workflow Executions can also be Terminated directly from the Web UI. In this case, you can provide a custom note in the UI, which is logged with the termination.

## Reset a Workflow Execution {#reset}

Resetting a Workflow Execution terminates the current Workflow Execution and starts a new Workflow Execution from a point you specify in its Event History.

Use reset when a Workflow is blocked due to a non-deterministic error or other issues that prevent it from completing. When you reset a Workflow, the Event History up to the reset point is copied to the new Workflow Execution, and the Workflow resumes from that point with the current code. Reset only works if you've fixed the underlying issue, such as removing non-deterministic code. Any progress made after the reset point will be discarded. Provide a reason when resetting, as it will be recorded in the Event History.

1. Navigate to the Workflow Execution details page.
2. Click the **Reset** button in the top right dropdown menu.
3. Select the Event ID to reset to.
4. Provide a reason for the reset.
5. Confirm the reset.

The Web UI shows available reset points and creates a link to the new Workflow Execution after the reset completes.

Use the `temporal workflow reset` command to reset a Workflow Execution:

```bash
temporal workflow reset \
  --workflow-id <workflow-id> \
  --event-id <event-id> \
  --reason "Reason for reset"
```

For example:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code"
```

By default, the command resets the latest Workflow Execution in the `default` Namespace. Use `--run-id` to reset a specific run. Use `--namespace` to specify a different Namespace:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code" \
  --namespace my-namespace \
  --tls-cert-path /path/to/cert.pem \
  --tls-key-path /path/to/key.pem
```

Monitor the new Workflow Execution after resetting to ensure it completes successfully.
---

## Child Workflows - .NET SDK

This page shows how to do the following:

- [Start a Child Workflow Execution](#child-workflows)
- [Set a Parent Close Policy](#parent-close-policy)

## Start a Child Workflow Execution {#child-workflows}

**How to start a Child Workflow Execution using the Temporal .NET SDK**

A [Child Workflow Execution](/child-workflows) is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.

When using a Child Workflow API, Child Workflow related Events ([StartChildWorkflowExecutionInitiated](/references/events#startchildworkflowexecutioninitiated), [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted), [ChildWorkflowExecutionCompleted](/references/events#childworkflowexecutioncompleted), etc.) are logged in the Workflow Execution Event History.

Always block progress until the [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted) Event is logged to the Event History to ensure the Child Workflow Execution has started. After that, Child Workflow Executions may be abandoned using the _Abandon_ [Parent Close Policy](/parent-close-policy) set in the Child Workflow Options.

To spawn a Child Workflow Execution in .NET, use the `ExecuteChildWorkflowAsync()` method, which starts the Child Workflow and waits for completion, or use the `StartChildWorkflowAsync()` method to start a Child Workflow and return its handle. The latter is useful if you want to do something after the Child Workflow has started, get its Workflow/Run ID, or Signal it while it is running, as shown in the sketch below.

:::note

`ExecuteChildWorkflowAsync()` is a helper method for `StartChildWorkflowAsync()` plus `await handle.GetResultAsync()`.

:::

```csharp
await Workflow.ExecuteChildWorkflowAsync((MyChildWorkflow wf) => wf.RunAsync());
```
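For the handle-based variant, here is a minimal sketch. It assumes a hypothetical `MyChildWorkflow` that exposes an `UpdateStatusAsync` Signal method; the logging and Signal calls are illustrative, not required.

```csharp
// Start the child workflow and get a handle to it without waiting for completion
var handle = await Workflow.StartChildWorkflowAsync((MyChildWorkflow wf) => wf.RunAsync());

// The handle exposes the child's Workflow ID (and first execution Run ID)
Workflow.Logger.LogInformation("Started child workflow {Id}", handle.Id);

// Signal the child while it is running (UpdateStatusAsync is a hypothetical signal method)
await handle.SignalAsync(wf => wf.UpdateStatusAsync("parent-checkpoint"));

// Wait for the child workflow to complete
await handle.GetResultAsync();
```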
## Set a Parent Close Policy {#parent-close-policy}

**How to set a Parent Close Policy using the Temporal .NET SDK**

A [Parent Close Policy](/parent-close-policy) determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out). The default Parent Close Policy option is set to terminate the Child Workflow Execution.

Set the `ParentClosePolicy` property inside the [`ChildWorkflowOptions`](https://dotnet.temporal.io/api/Temporalio.Workflows.ChildWorkflowOptions.html) for `ExecuteChildWorkflowAsync` or `StartChildWorkflowAsync` to specify the behavior of the Child Workflow when the Parent Workflow closes.

```csharp
await Workflow.ExecuteChildWorkflowAsync(
    (MyChildWorkflow wf) => wf.RunAsync(),
    new() { ParentClosePolicy = ParentClosePolicy.Abandon });
```

---

## Continue-As-New - .NET SDK

This page answers the following questions for .NET developers:

- [What is Continue-As-New?](#what)
- [How to Continue-As-New?](#how)
- [When is it right to Continue-as-New?](#when)
- [How to test Continue-as-New?](#how-to-test)

## What is Continue-As-New? {#what}

[Continue-As-New](/workflow-execution/continue-as-new) lets a Workflow Execution close successfully and creates a new Workflow Execution. You can think of it as a checkpoint when your Workflow gets too long or approaches certain scaling limits. The new Workflow Execution is in the same [chain](/workflow-execution#workflow-execution-chain); it keeps the same Workflow Id but gets a new Run Id and a fresh Event History. It also receives your Workflow's usual parameters.

## How to Continue-As-New using the .NET SDK {#how}

First, design your Workflow parameters so that you can pass in the "current state" when you Continue-As-New into the next Workflow run. This state is typically set to `None` for the original caller of the Workflow.

View the source code in the context of the rest of the application code.

```csharp
public record Input
{
    public State State { get; init; } = new();

    public bool TestContinueAsNew { get; init; }
}

[WorkflowInit]
public ClusterManagerWorkflow(Input input)
```

The test hook in the above snippet is covered [below](#how-to-test).

Inside your Workflow, throw the [`ContinueAsNewException`](https://dotnet.temporal.io/api/Temporalio.Workflows.ContinueAsNewException.html) created by `Workflow.CreateContinueAsNewException`. This stops the Workflow right away and starts a new one.

View the source code in the context of the rest of the application code.

```csharp
throw Workflow.CreateContinueAsNewException((ClusterManagerWorkflow wf) => wf.RunAsync(new()
{
    State = CurrentState,
    TestContinueAsNew = input.TestContinueAsNew,
}));
```

### Considerations for Workflows with Message Handlers {#with-message-handlers}

If you use Updates or Signals, don't call Continue-as-New from the handlers. Instead, wait for your handlers to finish in your main Workflow before you throw `CreateContinueAsNewException`. See the [`AllHandlersFinished`](message-passing#wait-for-message-handlers) example for guidance.

## When is it right to Continue-as-New using the .NET SDK? {#when}

Use Continue-as-New when your Workflow might hit [Event History Limits](/workflow-execution/event#event-history). Temporal tracks your Workflow's progress against these limits to let you know when you should Continue-as-New. Call `Workflow.ContinueAsNewSuggested` to check if it's time.

## How to test Continue-as-New using the .NET SDK {#how-to-test}

Testing Workflows that naturally Continue-as-New may be time-consuming and resource-intensive. Instead, add a test hook to check your Workflow's Continue-as-New behavior faster in automated tests.

For example, when `TestContinueAsNew == true`, this sample creates a test-only variable called `maxHistoryLength` and sets it to a small value. A helper variable in the Workflow checks it each time it considers using Continue-as-New:

View the source code in the context of the rest of the application code.

```csharp
private bool ShouldContinueAsNew =>
    // Don't continue as new while update running
    Workflow.AllHandlersFinished &&
    // Continue if suggested or, for ease of testing, max history reached
    (Workflow.ContinueAsNewSuggested || Workflow.CurrentHistoryLength > maxHistoryLength);
```
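To round out the sample, the main Workflow method can wait on this helper and then roll over to a new run. This is a minimal sketch under the same assumptions as the snippets above (a `ClusterManagerWorkflow` with `CurrentState` and an `input` field), not the full sample logic:

```csharp
[WorkflowRun]
public async Task RunAsync(Input input)
{
    // Block deterministically until the helper says it is time to continue as new.
    // AllHandlersFinished inside the helper ensures no Update/Signal handler is mid-flight.
    await Workflow.WaitConditionAsync(() => ShouldContinueAsNew);

    // Roll over to a new run, carrying the current state forward
    throw Workflow.CreateContinueAsNewException((ClusterManagerWorkflow wf) => wf.RunAsync(new()
    {
        State = CurrentState,
        TestContinueAsNew = input.TestContinueAsNew,
    }));
}
```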
---

## Converters and encryption - .NET SDK

Temporal's security model is designed around client-side encryption of Payloads. A client may encrypt Payloads before sending them to the server, and decrypt them after receiving them from the server. This provides a high degree of confidentiality because the Temporal Server itself has absolutely no knowledge of the actual data. It also gives implementers more power and more freedom regarding which client is able to read which data -- they can control access with keys, algorithms, or other security measures.

A Temporal developer adds client-side encryption of Payloads by providing a Custom Payload Codec to its Client. Depending on business needs, a complete implementation of Payload Encryption may involve selecting appropriate encryption algorithms, managing encryption keys, restricting a subset of their users from viewing payload output, or a combination of these.

The server itself never adds encryption over Payloads. Therefore, unless client-side encryption is implemented, Payload data will be persisted in non-encrypted form to the data store, and any Client that can make requests to a Temporal namespace (including the Temporal UI and CLI) will be able to read Payloads contained in Workflows. When working with sensitive data, you should always implement Payload encryption.

## Custom Payload Codec {#custom-payload-codec}

**How to use a custom Payload Codec using the .NET SDK**

Custom Data Converters can change the default Temporal Data Conversion behavior by adding hooks, sending payloads to external storage, or performing different encoding steps. If you only need to change the encoding performed on your payloads -- by adding compression or encryption -- you can override the default Data Converter to use a new `PayloadCodec`.

The `IPayloadCodec` needs to implement `EncodeAsync()` and `DecodeAsync()` methods. These should convert the given payloads as needed into new payloads, using the `"encoding"` metadata field. Do not mutate the existing payloads.

Here is an example of an encryption codec that just uses base64 in each direction:

```csharp
using System.Linq;
using System.Text;
using Google.Protobuf;
using Temporalio.Api.Common.V1;
using Temporalio.Converters;

public class EncryptionCodec : IPayloadCodec
{
    public Task<IReadOnlyCollection<Payload>> EncodeAsync(IReadOnlyCollection<Payload> payloads) =>
        Task.FromResult<IReadOnlyCollection<Payload>>(payloads.Select(p =>
        {
            return new Payload()
            {
                // Set our specific encoding. We may also want to add a key ID in here for use by
                // the decode side
                Metadata =
                {
                    ["encoding"] = ByteString.CopyFromUtf8("binary/my-payload-encoding"),
                },
                Data = ByteString.CopyFrom(Encrypt(p.ToByteArray())),
            };
        }).ToList());

    public Task<IReadOnlyCollection<Payload>> DecodeAsync(IReadOnlyCollection<Payload> payloads) =>
        Task.FromResult<IReadOnlyCollection<Payload>>(payloads.Select(p =>
        {
            // Ignore if it doesn't have our expected encoding
            if (p.Metadata.GetValueOrDefault("encoding")?.ToStringUtf8() != "binary/my-payload-encoding")
            {
                return p;
            }
            // Decrypt
            return Payload.Parser.ParseFrom(Decrypt(p.Data.ToByteArray()));
        }).ToList());

    private byte[] Encrypt(byte[] data) => Encoding.ASCII.GetBytes(Convert.ToBase64String(data));

    private byte[] Decrypt(byte[] data) => Convert.FromBase64String(Encoding.ASCII.GetString(data));
}
```

**Set Data Converter to use custom Payload Codec**

When creating a client, the default `DataConverter` can be updated with the payload codec like so:

```csharp
var myClient = await TemporalClient.ConnectAsync(new("localhost:7233")
{
    DataConverter = DataConverter.Default with { PayloadCodec = new EncryptionCodec() },
});
```

- Data **encoding** is performed by the client using the converters and codecs provided by Temporal or your custom implementation when passing input to the Temporal Cluster. For example, plain text input is usually serialized into a JSON object, and can then be compressed or encrypted.
- Data **decoding** may be performed by your application logic during your Workflows or Activities as necessary, but decoded Workflow results are never persisted back to the Temporal Cluster. Instead, they are stored encoded on the Cluster, and you need to provide an additional parameter when using the [temporal workflow show](/cli/workflow#show) command or when browsing the Web UI to view output.

For reference, see the [Encryption](https://github.com/temporalio/samples-dotnet/tree/main/src/Encryption) sample.
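If you want to sanity-check a codec in isolation before wiring it into a client, you can exercise the encode/decode round trip directly. A minimal sketch using the `EncryptionCodec` above:

```csharp
using System.Linq;
using Google.Protobuf;
using Temporalio.Api.Common.V1;

var codec = new EncryptionCodec();
var original = new Payload() { Data = ByteString.CopyFromUtf8("hello") };

// Encode, then decode, and verify the data survives the round trip
var encoded = await codec.EncodeAsync(new[] { original });
var decoded = await codec.DecodeAsync(encoded);
Console.WriteLine(decoded.Single().Data.ToStringUtf8()); // prints "hello"
```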
### Using a Codec Server

A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. The Codec Server is independent of the Temporal Cluster and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment. The Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand. Refer to the [Codec Server](/production-deployment/data-encryption) documentation for information on how to design and deploy a Codec Server.

## Payload conversion

Temporal SDKs provide a default [Payload Converter](/payload-converter) that can be customized to convert a custom data type to [Payload](/dataconversion#payload) and back.

### Conversion sequence {#conversion-sequence}

The order in which your encoding Payload Converters are applied depends on the order given to the Data Converter. You can set multiple encoding Payload Converters to run your conversions. When the Data Converter receives a value for conversion, it passes through each Payload Converter in sequence until the converter that handles the data type does the conversion. Payload Converters can be customized independently of a Payload Codec.

### Custom Payload Converter {#custom-payload-converter}

**How to use a custom Payload Converter with the .NET SDK.**

Data converters are used to convert raw Temporal payloads to/from actual .NET types. A custom data converter can be set via the `DataConverter` option when creating a client. Data converters are a combination of payload converters, payload codecs, and failure converters. Payload converters convert .NET values to/from serialized bytes. Payload codecs convert bytes to bytes (e.g. for compression or encryption). Failure converters convert exceptions to/from serialized failures. Data converters are in the `Temporalio.Converters` namespace.

The default data converter uses a default payload converter, which supports the following types:

- `null`
- `byte[]`
- `Google.Protobuf.IMessage` instances
- Anything that `System.Text.Json` supports
- `IRawValue` as unconverted raw payloads

Custom converters can be created for all uses.
For example, to create a client with a data converter that converts all C# property names to camel case, you would:

```csharp
using System.Text.Json;
using Temporalio.Client;
using Temporalio.Converters;

public class CamelCasePayloadConverter : DefaultPayloadConverter
{
    public CamelCasePayloadConverter()
        : base(new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })
    {
    }
}

var client = await TemporalClient.ConnectAsync(new()
{
    TargetHost = "localhost:7233",
    Namespace = "my-namespace",
    DataConverter = DataConverter.Default with { PayloadConverter = new CamelCasePayloadConverter() },
});
```

---

## Core application - .NET SDK

This page shows how to do the following:

- [Develop a basic Workflow Definition](#develop-workflow)
- [Develop a basic Activity Definition](#develop-activity)
- [Start an Activity from a Workflow](#activity-execution)
- [Run a Worker Process](#run-worker-process)
- [Set a Dynamic Workflow](#set-a-dynamic-workflow)
- [Set a Dynamic Activity](#set-a-dynamic-activity)

## Develop a Workflow {#develop-workflow}

**How to develop a basic Workflow using the Temporal .NET SDK**

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a [Workflow Definition](/workflow-definition).

In the Temporal .NET SDK programming model, Workflows are defined as classes. Specify the `[Workflow]` attribute from the `Temporalio.Workflows` namespace on the Workflow class to identify a Workflow.

Use the `[WorkflowRun]` attribute to mark the entry point method to be invoked. This must be set on one asynchronous method defined on the same class as `[Workflow]`.

```csharp
using Temporalio.Workflows;

[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string name)
    {
        var param = new MyActivityParams("Hello", name);
        return await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.MyActivity(param),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
    }
}
```

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow. All Workflow Definition parameters must be serializable.

### Workflow logic requirements {#workflow-logic-requirements}

Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.

This means there are several things Workflows cannot do, such as:

- Perform IO (network, disk, stdio, etc.)
- Access/alter external mutable state
- Do any threading
- Do anything using the system clock (e.g. `DateTime.Now`)
  - This includes .NET timers (e.g. `Task.Delay` or `Thread.Sleep`)
- Make any random calls
- Make any not-guaranteed-deterministic calls (e.g. iterating over a dictionary)

#### .NET Task Determinism

Some calls in .NET unexpectedly do non-deterministic things and are easy to accidentally use. This is especially true with `Task`s. Temporal requires that the deterministic `TaskScheduler.Current` is used, but many .NET async calls will use `TaskScheduler.Default` implicitly (and some analyzers even encourage this).
Here are some known gotchas to avoid with .NET tasks inside of Workflows (safe alternatives are sketched below):

- Do not use `Task.Run` - this uses the default scheduler and puts work on the thread pool.
  - Use `Workflow.RunTaskAsync` instead.
  - You can also use `Task.Factory.StartNew` with the current scheduler, or instantiate the `Task` and call `Task.Start` on it.
- Do not use `Task.ConfigureAwait(false)` - this will not use the current context.
  - If you must use `Task.ConfigureAwait`, use `Task.ConfigureAwait(true)`.
  - There is no significant performance benefit to `Task.ConfigureAwait` in workflows anyways due to how the scheduler works.
- Do not use anything that defaults to the default task scheduler.
- Do not use `Task.Delay`, `Task.Wait`, timeout-based `CancellationTokenSource`, or anything else that uses .NET built-in timers.
  - `Workflow.DelayAsync`, `Workflow.WaitConditionAsync`, or a non-timeout-based cancellation token source is suggested.
- Do not use `Task.WhenAny`.
  - Use `Workflow.WhenAnyAsync` instead.
  - Technically this only applies to an enumerable set of tasks with results, or more than 2 tasks with results. Other uses are safe. See [this issue](https://github.com/dotnet/runtime/issues/87481).
- Do not use `Task.WhenAll`.
  - Use `Workflow.WhenAllAsync` instead.
  - Technically `Task.WhenAll` is currently deterministic in .NET and safe, but it is better to use the wrapper to be sure.
- Do not use `CancellationTokenSource.CancelAsync`.
  - Use `CancellationTokenSource.Cancel` instead.
- Do not use `System.Threading.Semaphore`, `System.Threading.SemaphoreSlim`, or `System.Threading.Mutex`.
  - Use `Temporalio.Workflows.Semaphore` or `Temporalio.Workflows.Mutex` instead.
  - _Technically_ `SemaphoreSlim` does work if only the async form of `WaitAsync` is used, without timeouts, and `Release` is used. But anything else can deadlock the workflow, and its use is cumbersome since it must be disposed.
- Be wary of additional libraries' implicit use of the default scheduler.
  - For example, while there are articles for `Dataflow` about [using a specific scheduler](https://learn.microsoft.com/en-us/dotnet/standard/parallel-programming/how-to-specify-a-task-scheduler-in-a-dataflow-block), there are hidden implicit uses of `TaskScheduler.Default`. For example, see [this bug](https://github.com/dotnet/runtime/issues/83159).

In order to help catch wrong scheduler use, by default the Temporal .NET SDK adds an event source listener for info-level task events. While this technically receives events from all uses of tasks in the process, we make sure to ignore anything that is not running in a Workflow in a highly performant way (basically one thread-local check). For code that does run in a Workflow and accidentally starts a task in another scheduler, an `InvalidWorkflowOperationException` will be thrown, which "pauses" the Workflow (fails the Workflow Task, which is continually retried until the code is fixed). This is unfortunately a runtime-only check, but it can help catch mistakes early. If this needs to be turned off for any reason, set `DisableWorkflowTracingEventListener` to `true` in the Worker options.

In the near future, for modern .NET versions, we hope to use the [new `TimeProvider` API](https://github.com/dotnet/runtime/issues/36617), which will allow us to control current time and timers.
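To make these substitutions concrete, here is a minimal sketch of a Workflow body that uses the deterministic wrappers from the list above in place of their `Task` counterparts (the Activity and the timings are illustrative):

```csharp
[WorkflowRun]
public async Task RunAsync()
{
    // Instead of Task.Run: run background work on the deterministic workflow scheduler
    var background = Workflow.RunTaskAsync(async () =>
    {
        // Instead of Task.Delay: a durable, deterministic timer
        await Workflow.DelayAsync(TimeSpan.FromSeconds(30));
    });

    var activity = Workflow.ExecuteActivityAsync(
        (MyActivities a) => a.MyNormalActivity(),
        new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });

    // Instead of Task.WhenAny / Task.WhenAll: deterministic combinators
    await Workflow.WhenAnyAsync(background, activity);
    await Workflow.WhenAllAsync(background, activity);
}
```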
#### Workflow .editorconfig

Since Workflow code follows some different logic rules than regular C# code, there are some common analyzer rules that developers may want to disable. To ensure these are only disabled for Workflows, the current recommendation is to use the `.workflow.cs` extension for files containing Workflows. Here are the rules to disable:

- [CA1024](https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/quality-rules/ca1024) - This encourages properties instead of methods that look like getters. However, for reflection reasons we cannot use property getters for queries, so it is very normal to have

  ```csharp
  [WorkflowQuery]
  public string GetSomeThing() => someThing;
  ```

- [CA1822](https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/quality-rules/ca1822) - This encourages static methods when methods don't access instance state. Workflows, however, use instance methods for run, Signals, Queries, or Updates even if they could be static.
- [CA2007](https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/quality-rules/ca2007) - This encourages users to use `ConfigureAwait` instead of directly waiting on a task. But in Workflows, there is no benefit to this and it just adds noise (and if used, it needs to be `ConfigureAwait(true)`, not `ConfigureAwait(false)`).
- [CA2008](https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/quality-rules/ca2008) - This encourages users to always apply an explicit task scheduler because the default of `TaskScheduler.Current` is bad. But for Workflows, the default of `TaskScheduler.Current` is good/required.
- [CA5394](https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/quality-rules/ca5394) - This discourages use of non-crypto random. But deterministic Workflows, via `Workflow.Random`, intentionally provide a deterministic non-crypto random instance.
- `CS1998` - This discourages use of `async` on async methods that don't `await`. But Workflow handlers like Signals are often easier to write in one-line form this way, e.g. `public async Task SignalSomethingAsync(string value) => this.value = value;`.
- [VSTHRD105](https://github.com/microsoft/vs-threading/blob/main/doc/analyzers/VSTHRD105.md) - This is similar to `CA2008` above in that use of the implicit current scheduler is discouraged. That does not apply to Workflows, where it is encouraged/required.

Here is the `.editorconfig` snippet for the above, which may change as more analyzers need to be adjusted:

```ini
##### Configuration specific for Temporal workflows #####

[*.workflow.cs]

# We use getters for queries, they cannot be properties
dotnet_diagnostic.CA1024.severity = none

# Don't force workflows to have static methods
dotnet_diagnostic.CA1822.severity = none

# Do not need ConfigureAwait for workflows
dotnet_diagnostic.CA2007.severity = none

# Do not need task scheduler for workflows
dotnet_diagnostic.CA2008.severity = none

# Workflow randomness is intentionally deterministic
dotnet_diagnostic.CA5394.severity = none

# Allow async methods to not have await in them
dotnet_diagnostic.CS1998.severity = none

# Don't avoid, but rather encourage, things using TaskScheduler.Current in workflows
dotnet_diagnostic.VSTHRD105.severity = none
```

### Customize Workflow Type {#workflow-type}

**How to customize your Workflow Type name using the Temporal .NET SDK**

Workflows have a Type that is referred to as the Workflow name. The following examples demonstrate how to set a custom name for your Workflow Type. You can customize the Workflow name with a custom name in the attribute. For example, `[Workflow("my-workflow-name")]`.
If the name parameter is not specified, the Workflow name defaults to the unqualified class name.

```csharp
using Temporalio.Workflows;

[Workflow("MyDifferentWorkflowName")]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string name)
    {
        var param = new MyActivityParams("Hello", name);
        return await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.MyActivity(param),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
    }
}
```

## Develop an Activity {#develop-activity}

**How to develop a basic Activity using the Temporal .NET SDK**

One of the primary things that Workflows do is orchestrate the execution of Activities. An Activity is a normal method execution that's intended to execute a single, well-defined action (either short or long-running), such as querying a database, calling a third-party API, or transcoding a media file. An Activity can interact with the world outside the Temporal Platform or use a Temporal Client to interact with a Temporal Service. For the Workflow to be able to execute the Activity, we must define the [Activity Definition](/activity-definition).

You can develop an Activity Definition by using the `[Activity]` attribute from the `Temporalio.Activities` namespace on the method. To register a method as an Activity with a custom name, use an attribute parameter, for example `[Activity("your-activity")]`. Otherwise, the Activity name is the unqualified method name (sans an "Async" suffix if the method is async). Activities can be asynchronous or synchronous.

```csharp
using Temporalio.Activities;

public class MyActivities
{
    // Activities can be async and/or static too. We just demonstrate instance methods since many
    // use them that way.
    [Activity]
    public string MyActivity(MyActivityParams input) => $"{input.Greeting}, {input.Name}!";
}
```

There is no explicit limit to the total number of parameters that an [Activity Definition](/activity-definition) may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload.

A single argument is limited to a maximum size of 2 MB. And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Also, keep in mind that all Payload data is recorded in the [Workflow Execution Event History](/workflow-execution/event#event-history) and large Event Histories can affect Worker performance. This is because the entire Event History could be transferred to a Worker Process with a [Workflow Task](/tasks#workflow-task).

Some SDKs require that you pass context objects, others do not. When it comes to your application data (that is, data that is serialized and encoded into a Payload), we recommend that you use a single object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a method signature.

Activity parameters are the method parameters of the method with the `[Activity]` attribute. These can be any data type Temporal can convert, including records. Technically this can be multiple parameters, but Temporal strongly encourages a single parameter containing all input fields.

## Start Activity Execution {#activity-execution}

**How to start an Activity Execution using the Temporal .NET SDK**

Calls to spawn [Activity Executions](/activity-execution) are written within a [Workflow Definition](/workflow-definition). The call to spawn an Activity Execution generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command.
This results in the set of three [Activity Task](/tasks#activity-task) related Events ([ActivityTaskScheduled](/references/events#activitytaskscheduled), [ActivityTaskStarted](/references/events#activitytaskstarted), and ActivityTask[Closed]) in your Workflow Execution Event History.

A single instance of the Activities implementation is shared across multiple simultaneous Activity invocations. Activity implementation code should be _idempotent_.

The values passed to Activities through invocation parameters or returned through a result value are recorded in the Execution history. The entire Execution history is transferred from the Temporal service to Workflow Workers when a Workflow state needs to recover. A large Execution history can thus adversely impact the performance of your Workflow.

Therefore, be mindful of the amount of data you transfer through Activity invocation parameters or Return Values. Otherwise, no additional limitations exist on Activity implementations.

To spawn an Activity Execution, use the `ExecuteActivityAsync` operation from within your Workflow Definition.

```csharp
using Temporalio.Workflows;

[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string name)
    {
        var param = new MyActivityParams("Hello", name);
        return await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.MyActivity(param),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
    }
}
```

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set in the Activity Options.

### Get Activity Execution results {#get-activity-results}

**How to get the results of an Activity Execution using the Temporal .NET SDK**

The Activity result is returned in the task from the `ExecuteActivityAsync` call.

## Run Worker Process

**How to create and run a Worker Process using the Temporal .NET SDK**

The [Worker Process](/workers#worker-process) is where Workflow Functions and Activity Functions are executed.

- Each [Worker Entity](/workers#worker-entity) in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
- Each Worker Entity must also associate itself with exactly one [Task Queue](/task-queue).
- Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A [Worker Entity](/workers#worker-entity) is the component within a Worker Process that listens to a specific Task Queue.

Although multiple Worker Entities can be in a single Worker Process, a single Worker Entity Worker Process may be perfectly sufficient. For more information, see the [Worker tuning guide](/develop/worker-performance).

A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

To develop a Worker, create a new `Temporalio.Worker.TemporalWorker`, providing the Client and the Worker options, which include the Task Queue, Workflows, Activities, and more.

The following code example creates a Worker that polls for tasks from the Task Queue and executes the Workflow. When a Worker is created, it accepts a list of Workflows, a list of Activities, or both.
```csharp
// Create a client to localhost on default namespace
var client = await TemporalClient.ConnectAsync(new("localhost:7233")
{
    LoggerFactory = LoggerFactory.Create(builder =>
        builder.
            AddSimpleConsole(options => options.TimestampFormat = "[HH:mm:ss] ").
            SetMinimumLevel(LogLevel.Information)),
});

// Cancellation token cancelled on ctrl+c
using var tokenSource = new CancellationTokenSource();
Console.CancelKeyPress += (_, eventArgs) =>
{
    tokenSource.Cancel();
    eventArgs.Cancel = true;
};

// Create an activity instance with some state
var activities = new MyActivities();

// Run worker until cancelled
Console.WriteLine("Running worker");
using var worker = new TemporalWorker(
    client,
    new TemporalWorkerOptions(taskQueue: "my-task-queue").
        AddAllActivities(activities).
        AddWorkflow<MyWorkflow>());
try
{
    await worker.ExecuteAsync(tokenSource.Token);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Worker cancelled");
}
```

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

### Worker Processes with host builder and dependency injection

The [Temporalio.Extensions.Hosting](https://github.com/temporalio/sdk-dotnet/tree/main/src/Temporalio.Extensions.Hosting) extension exists for .NET developers to support HostBuilder and Dependency Injection approaches. To create the same worker as before using this approach:

```csharp
var host = Host.CreateDefaultBuilder(args)
    .ConfigureLogging(ctx => ctx.AddSimpleConsole().SetMinimumLevel(LogLevel.Information))
    .ConfigureServices(ctx =>
        ctx.
            // Add the database client at the scoped level
            AddScoped<MyDatabaseClient>().
            // Add the worker
            AddHostedTemporalWorker(
                clientTargetHost: "localhost:7233",
                clientNamespace: "default",
                taskQueue: "my-task-queue").
            // Add the activities class at the scoped level
            AddScopedActivities<MyActivities>().
            AddWorkflow<MyWorkflow>())
    .Build();
await host.RunAsync();
```

## Set a Dynamic Workflow {#set-a-dynamic-workflow}

**How to set a Dynamic Workflow using the Temporal .NET SDK**

A Dynamic Workflow in Temporal is a Workflow that is invoked dynamically at runtime if no other Workflow with the same name is registered. A Workflow can be made dynamic by setting `Dynamic` as `true` on the `[Workflow]` attribute. You must register the Workflow with the Worker before it can be invoked. Only one Dynamic Workflow can be present on a Worker.

The Workflow Definition must then accept a single argument of type `Temporalio.Converters.IRawValue[]`. The [Workflow.PayloadConverter](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_PayloadConverter) property is used to convert an `IRawValue` object to the desired type using extension methods in the `Temporalio.Converters` namespace.
```csharp
[Workflow(Dynamic = true)]
public class DynamicWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(IRawValue[] args)
    {
        var name = Workflow.PayloadConverter.ToValue<string>(args.Single());
        var param = new MyActivityParams("Hello", name);
        return await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.MyActivity(param),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
    }
}
```

## Set a Dynamic Activity {#set-a-dynamic-activity}

**How to set a Dynamic Activity using the Temporal .NET SDK**

A Dynamic Activity in Temporal is an Activity that is invoked dynamically at runtime if no other Activity with the same name is registered. An Activity can be made dynamic by setting `Dynamic` as `true` on the `[Activity]` attribute. You must register the Activity with the Worker before it can be invoked. Only one Dynamic Activity can be present on a Worker.

The Activity Definition must then accept a single argument of type `Temporalio.Converters.IRawValue[]`. The [PayloadConverter](https://dotnet.temporal.io/api/Temporalio.Activities.ActivityExecutionContext.html#Temporalio_Activities_ActivityExecutionContext_PayloadConverter) property on the `ActivityExecutionContext` is used to convert an `IRawValue` object to the desired type using extension methods in the `Temporalio.Converters` namespace.

```csharp
public class MyActivities
{
    [Activity(Dynamic = true)]
    public string DynamicActivity(IRawValue[] args)
    {
        var input = ActivityExecutionContext.Current.PayloadConverter.ToValue<MyActivityParams>(args.Single());
        return $"{input.Greeting}, {input.Name}!";
    }
}
```

---

## Debugging - .NET SDK

## Debugging {#debug}

This page shows how to do the following:

- [Debug in a development environment](#debug-in-a-development-environment)
- [Debug in a production environment](#debug-in-a-development-production)

### Debug in a development environment {#debug-in-a-development-environment}

**How to debug in a development environment using the Temporal .NET SDK**

In developing Workflows, you can use the normal development tools of logging and a debugger to see what's happening in your Workflow.

In addition to the normal development tools of logging and a debugger, you can also see what's happening in your Workflow by using the [Web UI](/web-ui) or [Temporal CLI](/cli). The Web UI provides insight into your Workflows, making it easier to identify issues and monitor the state of your Workflows in real time.

### Debug in a production environment {#debug-in-a-development-production}

**How to debug in a production environment using the Temporal .NET SDK**

You can debug production Workflows using:

- [Web UI](/web-ui)
- [Temporal CLI](/cli)
- [Replay](/develop/dotnet/testing-suite#replay)
- [Tracing](/develop/dotnet/observability#tracing)
- [Logging](/develop/dotnet/observability#logging)

You can debug and tune Worker performance with metrics and the [Worker performance guide](/develop/worker-performance). For more information, see [Observability ▶️ Metrics](/develop/dotnet/observability#metrics) for setting up SDK metrics.

Debug Server performance with [Cloud metrics](/cloud/metrics/) or [self-hosted Server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics).

---

## Durable Timers - .NET SDK

This page describes how to set a Durable Timer using the Temporal .NET SDK.

A [Durable Timer](/workflow-execution/timers-delays) is used to pause the execution of a Workflow for a specified duration. A Workflow can sleep for days or even months.
Timers are persisted, so even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service are back up, the Durable Timer call will resolve and your code will continue executing. Sleeping is a resource-light operation: it does not tie up the process, and you can run millions of Timers off a single Worker.

To add a Timer in a Workflow, use `Workflow.DelayAsync`. This is like a deterministic form of `Task.Delay`.

```csharp
// Sleep for 3 days
await Workflow.DelayAsync(TimeSpan.FromDays(3));
```

---

## Enriching the User Interface - .NET SDK

Temporal supports adding context to Workflows and events with metadata. This helps users identify and understand Workflows and their operations.

## Adding Summary and Details to Workflows

### Starting a Workflow

When starting a Workflow, you can provide a static summary and details to help identify the Workflow in the UI:

```csharp
using Temporalio.Client;

// Create client
var client = await TemporalClient.ConnectAsync(new("localhost:7233"));

// Start a Workflow with static summary and details
var handle = await client.StartWorkflowAsync(
    (YourWorkflow wf) => wf.RunAsync("Workflow input"),
    new WorkflowOptions
    {
        Id = "your-Workflow-id",
        TaskQueue = "your-task-queue",
        StaticSummary = "Order processing for customer #12345",
        StaticDetails = "Processing premium order with expedited shipping",
    });
```

`StaticSummary` is a single-line description that appears in the Workflow list view, limited to 200 bytes. `StaticDetails` can be multi-line and provides more comprehensive information that appears in the Workflow details view, with a larger limit of 20K bytes. The input format is standard Markdown, excluding images, HTML, and scripts.

You can also use the `ExecuteWorkflowAsync` method with the same parameters:

```csharp
var result = await client.ExecuteWorkflowAsync(
    (YourWorkflow wf) => wf.RunAsync("Workflow input"),
    new WorkflowOptions
    {
        Id = "your-Workflow-id",
        TaskQueue = "your-task-queue",
        StaticSummary = "Order processing for customer #12345",
        StaticDetails = "Processing premium order with expedited shipping",
    });
```

### Inside the Workflow

Within a Workflow, you can get and set the _current Workflow details_. Unlike the static summary/details set at Workflow start, this value can be updated throughout the life of the Workflow. Current Workflow details also use Markdown format (excluding images, HTML, and scripts) and can span multiple lines.
```csharp
using Temporalio.Workflows;

[Workflow]
public class YourWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string input)
    {
        // Get the current details
        var currentDetails = Workflow.CurrentDetails;
        Workflow.Logger.LogInformation($"Current details: {currentDetails}");

        // Set/update the current details
        Workflow.CurrentDetails = "Updated Workflow details with new status";

        return "Workflow completed";
    }
}
```

### Adding Summary to Activities and Timers

You can attach a metadata parameter `Summary` to Activities when starting them from within a Workflow:

```csharp
using Temporalio.Activities;
using Temporalio.Workflows;

[Workflow]
public class YourWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string input)
    {
        // Execute an activity with a summary
        var result = await Workflow.ExecuteActivityAsync(
            (YourActivities act) => act.YourActivityAsync(input),
            new ActivityOptions
            {
                StartToCloseTimeout = TimeSpan.FromSeconds(10),
                Summary = "Processing user data",
            });
        return result;
    }
}
```

Similarly, you can attach a `Summary` to Timers within a Workflow:

```csharp
using Temporalio.Workflows;

[Workflow]
public class YourWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string input)
    {
        // Create a timer with a summary
        await Workflow.DelayWithOptionsAsync(new DelayOptions(TimeSpan.FromMinutes(5))
        {
            Summary = "Waiting for payment confirmation",
        });
        return "Timer completed";
    }
}
```

The input format for `Summary` is a string, limited to 200 bytes.

## Viewing Summary and Details in the UI

Once you've added summaries and details to your Workflows, Activities, and Timers, you can view this enriched information in the Temporal Web UI. Navigate to your Workflow's details page to see the metadata displayed in two key locations:

### Workflow Overview Section

At the top of the Workflow details page, you'll find the Workflow-level metadata:

- **Summary & Details** - Displays the static summary and static details set when starting the Workflow
- **Current Details** - Displays the dynamic details that can be updated during Workflow execution

All Workflow details support standard Markdown formatting (excluding images, HTML, and scripts), allowing you to create rich, structured information displays.

### Event History

Individual events in the Workflow's Event History display their associated summaries when available. Workflow, Activity, and Timer summaries appear in purple text next to their corresponding events, providing immediate context without requiring you to expand the Event details. When you do expand an Event, the summary is also prominently displayed in the detailed view.

---

## Failure detection - .NET SDK

This page shows how to do the following:

- [Raise and Handle Exceptions](#exception-handling)
- [Deliberately Fail Workflows](#workflow-failure)
- [Set Workflow timeouts](#workflow-timeouts)
- [Set Workflow retries](#workflow-retries)
- [Set Activity timeouts](#activity-timeouts)
- [Set Activity Retry Policy](#activity-retries)
- [Heartbeat an Activity](#activity-heartbeats)
- [Set Heartbeat timeouts](#heartbeat-timeout)

## Raise and Handle Exceptions {#exception-handling}

In each Temporal SDK, error handling is implemented idiomatically, following the conventions of the language. Temporal uses several different error classes internally — for example, [`CanceledFailureException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.CanceledFailureException.html) in the .NET SDK, to handle a Workflow cancellation.
You should not raise or otherwise implement these manually, as they are tied to Temporal platform logic. The one Temporal error class that you will typically raise deliberately is [`ApplicationFailureException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.ApplicationFailureException.html). In fact, *any* other exceptions that are raised from your C# code in a Temporal Activity will be converted to an `ApplicationFailureException` internally. This way, an error's type, severity, and any additional details can be sent to the Temporal Service, indexed by the Web UI, and even serialized across language boundaries.

In other words, these two code samples do the same thing:

```csharp
[Serializable]
public class InvalidDepartmentException : Exception
{
    public InvalidDepartmentException() : base() { }

    public InvalidDepartmentException(string message) : base(message) { }

    public InvalidDepartmentException(string message, Exception inner) : base(message, inner) { }
}

[Activity]
public Task SendBillAsync(Bill bill)
{
    throw new InvalidDepartmentException("Invalid department");
}
```

```csharp
[Activity]
public Task SendBillAsync(Bill bill)
{
    throw new ApplicationFailureException("Invalid department", errorType: "InvalidDepartmentException");
}
```

Depending on your implementation, you may decide to use either method. One reason to use the Temporal `ApplicationFailureException` class is that it allows you to set an additional `nonRetryable` parameter. This way, you can decide whether an error should not be retried automatically by Temporal. This can be useful for deliberately failing a Workflow due to bad input data, rather than waiting for a timeout to elapse:

```csharp
[Activity]
public Task SendBillAsync(Bill bill)
{
    throw new ApplicationFailureException("Invalid department", nonRetryable: true);
}
```

You can alternatively specify a list of errors that are non-retryable in your Activity [Retry Policy](#activity-retries).

## Failing Workflows {#workflow-failure}

One of the core design principles of Temporal is that an Activity Failure will never directly cause a Workflow Failure — a Workflow should never return as Failed unless deliberately. The default retry policy associated with Temporal Activities is to retry them until reaching a certain timeout threshold. Activities will not actually *return* a failure to your Workflow until this condition or another non-retryable condition is met. At this point, you can decide how to handle an error returned by your Activity the way you would in any other program. For example, you could implement a [Saga Pattern](https://github.com/temporalio/samples-dotnet/tree/main/src/Saga) that uses `try`/`catch` blocks to "unwind" some of the steps your Workflow has performed up to the point of Activity Failure.

**You will only fail a Workflow by manually raising an `ApplicationFailureException` from the Workflow code.** You could do this in response to an Activity Failure, if the failure of that Activity means that your Workflow should not continue:

```csharp
try
{
    await Workflow.ExecuteActivityAsync(
        (Activities act) => act.ValidateCreditCardAsync(order.Customer.CreditCardNumber),
        options);
}
catch (ActivityFailureException err)
{
    logger.LogError("Unable to process credit card: {Message}", err.Message);
    throw new ApplicationFailureException(message: "Invalid credit card number error");
}
```

This works differently in a Workflow than raising exceptions from Activities.
In an Activity, any C# exceptions or custom exceptions are converted to a Temporal Application Failure. In a Workflow, any exceptions that are raised other than an explicit Temporal failure (such as a deliberately thrown `ApplicationFailureException`) will only fail that particular [Workflow Task](https://docs.temporal.io/tasks#workflow-task-execution) and be retried. This includes typical C# runtime exceptions (such as a `NullReferenceException`) that are raised automatically. These errors are treated as bugs that can be corrected with a fixed deployment, rather than a reason for a Temporal Workflow Execution to return unexpectedly.

## Workflow timeouts {#workflow-timeouts}

**How to set Workflow timeouts using the Temporal .NET SDK**

Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution. Workflow timeouts are set when [starting the Workflow Execution](#workflow-timeouts).

- **[Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout):** restricts the maximum amount of time that a single Workflow Execution can be executed.
- **[Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout):** restricts the maximum amount of time that a single Workflow Run can last.
- **[Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout):** restricts the maximum amount of time that a Worker can execute a Workflow Task.

These values can be set in the `WorkflowOptions` when calling `StartWorkflowAsync` or `ExecuteWorkflowAsync`. Available timeouts are:

- ExecutionTimeout
- RunTimeout
- TaskTimeout

```csharp
var result = await client.ExecuteWorkflowAsync(
    (MyWorkflow wf) => wf.RunAsync(),
    new(id: "my-workflow-id", taskQueue: "my-task-queue")
    {
        ExecutionTimeout = TimeSpan.FromMinutes(5),
    });
```

### Set Workflow retries {#workflow-retries}

**How to set Workflow retries using the Temporal .NET SDK**

A Retry Policy can work in cooperation with the timeouts to provide fine controls to optimize the execution experience. Use a [Retry Policy](/encyclopedia/retry-policies) to retry a Workflow Execution in the event of a failure. Workflow Executions do not retry by default, and Retry Policies should be used with Workflow Executions only in certain situations.

The `RetryPolicy` can be set in the `WorkflowOptions` when calling `StartWorkflowAsync` or `ExecuteWorkflowAsync`.

```csharp
var result = await client.ExecuteWorkflowAsync(
    (MyWorkflow wf) => wf.RunAsync(),
    new(id: "my-workflow-id", taskQueue: "my-task-queue")
    {
        RetryPolicy = new() { MaximumInterval = TimeSpan.FromSeconds(10) },
    });
```

## Activity Timeouts {#activity-timeouts}

**How to set Activity Timeouts using the Temporal .NET SDK**

Each Activity Timeout controls the maximum duration of a different aspect of an Activity Execution. The following Timeouts are available in the Activity Options.

- **[Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout):** is the maximum amount of time allowed for the overall [Activity Execution](/activity-execution).
- **[Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout):** is the maximum time allowed for a single [Activity Task Execution](/tasks#activity-task-execution).
- **[Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout):** is the maximum amount of time that is allowed from when an [Activity Task](/tasks#activity-task) is scheduled to when a [Worker](/workers#worker) starts that Activity Task.
An Activity Execution must have either the Start-To-Close or the Schedule-To-Close Timeout set. These values can be set in the `ActivityOptions` when calling `ExecuteActivityAsync`. Available timeouts are:

- ScheduleToCloseTimeout
- ScheduleToStartTimeout
- StartToCloseTimeout

```csharp
return await Workflow.ExecuteActivityAsync(
    (MyActivities a) => a.MyActivity(param),
    new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
```

### Set an Activity Retry Policy {#activity-retries}

**How to set an Activity Retry Policy using the Temporal .NET SDK**

A Retry Policy works in cooperation with the timeouts to provide fine controls to optimize the execution experience. Activity Executions are automatically associated with a default [Retry Policy](/encyclopedia/retry-policies) if a custom one is not provided.

To create an Activity Retry Policy in .NET, set the `RetryPolicy` on the `ActivityOptions` when calling `ExecuteActivityAsync`.

```csharp
return await Workflow.ExecuteActivityAsync(
    (MyActivities a) => a.MyActivity(param),
    new()
    {
        StartToCloseTimeout = TimeSpan.FromMinutes(5),
        RetryPolicy = new() { MaximumInterval = TimeSpan.FromSeconds(10) },
    });
```

### Override the Retry interval with `nextRetryDelay` {#next-retry-delay}

When you throw an [Application Failure](/references/failures#application-failure) and assign the `nextRetryDelay` field, its value replaces and overrides the Retry interval defined in the active Retry Policy. For example, you might scale the next Retry delay interval based on the current number of attempts.

Here's how you'd do that in an Activity. In the following sample, the `attempt` count is retrieved from the Activity Execution context and used to set the number of seconds for the next Retry delay:

```csharp
var attempt = ActivityExecutionContext.Current.Info.Attempt;
throw new ApplicationFailureException(
    $"Something bad happened on attempt {attempt}",
    errorType: "my_failure_type",
    nextRetryDelay: TimeSpan.FromSeconds(3 * attempt));
```

## Heartbeat an Activity {#activity-heartbeats}

**How to Heartbeat an Activity using the Temporal .NET SDK**

An [Activity Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) is a ping from the [Worker Process](/workers#worker-process) that is executing the Activity to the [Temporal Service](/temporal-service). Each Heartbeat informs the Temporal Service that the [Activity Execution](/activity-execution) is making progress and the Worker has not crashed. If the Temporal Service does not receive a Heartbeat within a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) time period, the Activity will be considered failed, and another [Activity Task Execution](/tasks#activity-task-execution) may be scheduled according to the Retry Policy.

Heartbeats may not always be sent to the Temporal Service—they may be [throttled](/encyclopedia/detecting-activity-failures#throttling) by the Worker.

Activity Cancellations are delivered to Activities from the Temporal Service when they Heartbeat. Activities that don't Heartbeat can't receive a Cancellation. Heartbeat throttling may lead to Cancellation getting delivered later than expected.

Heartbeats can contain a `Details` field describing the Activity's current progress. If an Activity gets retried, the Activity can access the `Details` from the last Heartbeat that was sent to the Temporal Service.
To Heartbeat an Activity Execution in .NET, use the [`Heartbeat()`](https://dotnet.temporal.io/api/Temporalio.Activities.ActivityExecutionContext.html#Temporalio_Activities_ActivityExecutionContext_Heartbeat_System_Object___) method on the `ActivityExecutionContext`.

```csharp
[Activity]
public async Task MyActivityAsync()
{
    while (true)
    {
        // Send heartbeat
        ActivityExecutionContext.Current.Heartbeat();
        // Do some work, passing the cancellation token
        await Task.Delay(1000, ActivityExecutionContext.Current.CancellationToken);
    }
}
```

In addition to obtaining cancellation information, Heartbeats also support detail data that persists on the server for retrieval during Activity retry. If an Activity calls `Heartbeat(123, 456)` and then fails and is retried, `HeartbeatDetails` on the `ActivityInfo` returns a collection containing `123` and `456` on the next Run.

### Set a Heartbeat Timeout {#heartbeat-timeout}

**How to set a Heartbeat Timeout using the Temporal .NET SDK**

A [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) works in conjunction with [Activity Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat).

`HeartbeatTimeout` is a property on `ActivityOptions` for `ExecuteActivityAsync` used to set the maximum time between Activity Heartbeats.

```csharp
await Workflow.ExecuteActivityAsync(
    (MyActivities a) => a.MyActivity(param),
    new()
    {
        StartToCloseTimeout = TimeSpan.FromMinutes(5),
        HeartbeatTimeout = TimeSpan.FromSeconds(30),
    });
```

---

## .NET SDK developer guide

![.NET SDK Banner](/img/assets/banner-dotnet-temporal.png)

:::info .NET SPECIFIC RESOURCES

Build Temporal Applications with the .NET SDK.

**Temporal .NET Technical Resources:**

- [.NET Quickstart](https://docs.temporal.io/develop/dotnet/set-up-your-local-dotnet)
- [.NET API Documentation](https://dotnet.temporal.io/api/)
- [.NET SDK Code Samples](https://github.com/temporalio/samples-dotnet)
- [.NET SDK GitHub](https://github.com/temporalio/sdk-dotnet)
- [Temporal 101 in .NET Free Course](https://learn.temporal.io/courses/temporal_101/dotnet/)

**Get Connected with the Temporal .NET Community:**

- [Temporal .NET Community Slack](https://temporalio.slack.com/archives/C012SHMPDDZ)
- [.NET SDK Forum](https://community.temporal.io/tag/dotnet-sdk)

:::

## [Core Application](/develop/dotnet/core-application)

Use the essential components of a Temporal Application (Workflows, Activities, and Workers) to build and run a Temporal application.

- [Develop a basic Workflow Definition](/develop/dotnet/core-application#develop-workflow): Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a Workflow Definition.
- [Develop a basic Activity Definition](/develop/dotnet/core-application#develop-activity): One of the primary things that Workflows do is orchestrate the execution of Activities.
- [Start an Activity from a Workflow](/develop/dotnet/core-application#activity-execution): Calls to spawn Activity Executions are written within a Workflow Definition.
- [Run a Worker Process](/develop/dotnet/core-application#run-worker-process): The Worker Process is where Workflow Functions and Activity Functions are executed.
- [Set a Dynamic Workflow](/develop/dotnet/core-application#set-a-dynamic-workflow): Set a Workflow that can be invoked dynamically at runtime.
- [Set a Dynamic Activity](/develop/dotnet/core-application#set-a-dynamic-activity): Set an Activity that can be invoked dynamically at runtime.
## [Temporal Client](/develop/dotnet/temporal-client)

Connect to a Temporal Service and start a Workflow Execution.

- [Create a Temporal Client](/develop/dotnet/temporal-client#connect-to-development-service): Instantiate and configure a client to interact with the Temporal Service.
- [Connect to Temporal Cloud](/develop/dotnet/temporal-client#connect-to-temporal-cloud): Securely connect to the Temporal Cloud for a fully managed service.
- [Start a Workflow](/develop/dotnet/temporal-client#start-workflow): Initiate Workflows seamlessly via the .NET SDK.
- [Get Workflow results](/develop/dotnet/temporal-client#get-workflow-results): Retrieve and process the results of your Workflows efficiently.

## [Testing](/develop/dotnet/testing-suite)

Set up the testing suite and test Workflows and Activities.

- [Test frameworks](/develop/dotnet/testing-suite#test-frameworks): The SDK provides a test framework to facilitate Workflow and integration testing.
- [Testing Workflows](/develop/dotnet/testing-suite#testing-workflows): Ensure the functionality and reliability of your Workflows.
- [Testing Activities](/develop/dotnet/testing-suite#test-activities): Validate the execution and outcomes of your Activities.
- [Replay test](/develop/dotnet/testing-suite#replay): Replay recreates the exact state of a Workflow Execution.

## [Failure detection](/develop/dotnet/failure-detection)

Explore how your application can detect failures using timeouts and automatically attempt to mitigate them with retries.

- [Workflow timeouts](/develop/dotnet/failure-detection#workflow-timeouts): Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution.
- [Workflow retries](/develop/dotnet/failure-detection#workflow-retries): A Workflow Retry Policy can be used to retry a Workflow Execution in the event of a failure.
- [Activity timeouts](/develop/dotnet/failure-detection#activity-timeouts): Each Activity timeout controls the maximum duration of a different aspect of an Activity Execution.
- [Set an Activity Retry Policy](/develop/dotnet/failure-detection#activity-retries): Define retry logic for Activities to handle failures.
- [Heartbeat an Activity](/develop/dotnet/failure-detection#activity-heartbeats): An Activity Heartbeat is a ping from the Worker that is executing the Activity to the Temporal Service.
- [Heartbeat Timeout](/develop/dotnet/failure-detection#heartbeat-timeout): A Heartbeat Timeout works in conjunction with Activity Heartbeats.

## [Workflow message passing](/develop/dotnet/message-passing)

Send messages to and read the state of Workflow Executions.

### Signals

- [Define Signal](/develop/dotnet/message-passing#signals): A Signal is a message sent to a running Workflow Execution.
- [Send a Signal from a Temporal Client](/develop/dotnet/message-passing#send-signal-from-client): Send a Signal to a Workflow from a Temporal Client.
- [Send a Signal from a Workflow](/develop/dotnet/message-passing#send-signal-from-workflow): Send a Signal to another Workflow from within a Workflow; this is also called an External Signal.
- [Signal-With-Start](/develop/dotnet/message-passing#signal-with-start): Start a Workflow and send it a Signal in a single operation from the Client.
- [Dynamic Handler](/develop/dotnet/message-passing#dynamic-handler): Dynamic Handlers provide flexibility to handle cases where the names of Workflows, Activities, Signals, or Queries aren't known at run time.
- [Set a Dynamic Signal](/develop/dotnet/message-passing#set-a-dynamic-signal): A Dynamic Signal in Temporal is a Signal that is invoked dynamically at runtime if no other Signal with the same name is registered.

### Queries

- [Define a Query](/develop/dotnet/message-passing#queries): A Query is a synchronous operation that is used to get the state of a Workflow Execution.
- [Send Queries](/develop/dotnet/message-passing#send-query): Queries are sent from the Temporal Client.
- [Set a Dynamic Query](/develop/dotnet/message-passing#set-a-dynamic-query): A Dynamic Query in Temporal is a Query that is invoked dynamically at runtime if no other Query with the same name is registered.

### Updates

- [Define an Update](/develop/dotnet/message-passing#updates): An Update is an operation that can mutate the state of a Workflow Execution and return a response.
- [Send an Update](/develop/dotnet/message-passing#send-update-from-client): An Update is sent from the Temporal Client.

## [Interrupt a Workflow](/develop/dotnet/cancellation)

Interrupt a Workflow Execution with a Cancel or Terminate action.

- [Cancel a Workflow](/develop/dotnet/cancellation#cancellation): Interrupt a Workflow Execution and its Activities through Workflow cancellation.
- [Terminate a Workflow](/develop/dotnet/cancellation#termination): Interrupt a Workflow Execution and its Activities through Workflow termination.
- [Reset a Workflow](/develop/dotnet/cancellation#reset): Resume a Workflow Execution from an earlier point in its Event History.

## [Asynchronous Activity completion](/develop/dotnet/asynchronous-activity)

Complete Activities asynchronously.

- [Asynchronous Activity](/develop/dotnet/asynchronous-activity): Asynchronous Activity completion enables the Activity Function to return without the Activity Execution completing.

## [Versioning](/develop/dotnet/versioning)

Change Workflow Definitions without causing non-deterministic behavior in running Workflows.

- [Use the .NET SDK Patching API](/develop/dotnet/versioning#patching): Patching Workflows using the .NET SDK.

## [Observability](/develop/dotnet/observability)

Configure and use the Temporal Observability APIs.

- [Emit Metrics](/develop/dotnet/observability#metrics): Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process.
- [Set up Tracing](/develop/dotnet/observability#tracing): Explains how the .NET SDK supports tracing and custom context propagation.
- [Log from a Workflow](/develop/dotnet/observability#logging): Send logs and errors to a logging service, so that when things go wrong, you can see what happened.
- [Use Visibility APIs](/develop/dotnet/observability#visibility): The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

## [Debugging](/develop/dotnet/debugging)

Explore various ways to debug your application.

- [Debug in a development environment](/develop/dotnet/debugging#debug-in-a-development-environment): In addition to the normal development tools of logging and a debugger, you can also see what’s happening in your Workflow by using the Web UI and the Temporal CLI.
- [Debug in production](/develop/dotnet/debugging#debug-in-a-development-production): Debug production Workflows using the Web UI, the Temporal CLI, Replays, Tracing, or Logging.

## [Schedules](/develop/dotnet/schedules)

Run Workflows on a schedule and delay the start of a Workflow.
- [Schedule a Workflow](/develop/dotnet/schedules#schedule-a-workflow)
- [Create a Scheduled Workflow](/develop/dotnet/schedules#create-a-workflow): Create a new schedule for a scheduled Workflow.
- [Backfill a Scheduled Workflow](/develop/dotnet/schedules#backfill-a-scheduled-workflow): Backfills a past time range of actions for a scheduled Workflow.
- [Delete a Scheduled Workflow](/develop/dotnet/schedules#delete-a-scheduled-workflow): Deletes a schedule for a scheduled Workflow.
- [Describe a Scheduled Workflow](/develop/dotnet/schedules#describe-a-scheduled-workflow): Get schedule configuration and current state for a scheduled Workflow.
- [List a Scheduled Workflow](/develop/dotnet/schedules#list-a-scheduled-workflow): List a schedule for a scheduled Workflow.
- [Pause a Scheduled Workflow](/develop/dotnet/schedules#pause-a-scheduled-workflow): Pause a schedule for a scheduled Workflow.
- [Trigger a Scheduled Workflow](/develop/dotnet/schedules#trigger-a-scheduled-workflow): Triggers an immediate action for a scheduled Workflow.
- [Update a Scheduled Workflow](/develop/dotnet/schedules#update-a-scheduled-workflow): Updates a schedule with a new definition for a scheduled Workflow.
- [Use Start Delay](/develop/dotnet/schedules#start-delay): Use Start Delay to delay the execution of a Workflow without the need for a recurring schedule.

## [Data encryption](/develop/dotnet/converters-and-encryption)

Use compression, encryption, and other data handling by implementing custom converters and codecs.

- [Use a custom Payload Codec](/develop/dotnet/converters-and-encryption#custom-payload-codec): Create a custom PayloadCodec implementation and define your encryption/compression and decryption/decompression logic.
- [Use a custom Payload Converter](/develop/dotnet/converters-and-encryption#custom-payload-converter): A custom data converter can be set via the `DataConverter` option when creating a client.

## [Durable Timers](/develop/dotnet/durable-timers)

Use Timers to make a Workflow Execution pause or "sleep" for seconds, minutes, days, months, or years.

- [Sleep](/develop/dotnet/durable-timers): A Timer lets a Workflow sleep for a fixed time period.

## Temporal Nexus

The [Temporal Nexus](/develop/dotnet/nexus) feature guide shows how to use Temporal Nexus to connect Durable Executions within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations.

- [Create a Nexus Endpoint to route requests from caller to handler](/develop/dotnet/nexus#create-nexus-endpoint)
- [Define the Nexus Service contract](/develop/dotnet/nexus#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](/develop/dotnet/nexus#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](/develop/dotnet/nexus#develop-caller-workflow-nexus-service)
- [Make Nexus calls across Namespaces with a development Server](/develop/dotnet/nexus#nexus-calls-across-namespaces-dev-server)
- [Make Nexus calls across Namespaces in Temporal Cloud](/develop/dotnet/nexus#nexus-calls-across-namespaces-temporal-cloud)

## [Child Workflows](/develop/dotnet/child-workflows)

Explore how to spawn a Child Workflow Execution and handle Child Workflow Events.

- [Start a Child Workflow Execution](/develop/dotnet/child-workflows): A Child Workflow Execution is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.
- [Set a Parent Close Policy](/develop/dotnet/child-workflows#parent-close-policy): A Parent Close Policy determines what happens to a Child Workflow Execution if its Parent changes to a Closed status.

## [Continue-As-New](/develop/dotnet/continue-as-new)

Continue the Workflow Execution with a new Workflow Execution using the same Workflow ID.

- [Continue-As-New](/develop/dotnet/continue-as-new): Continue-As-New enables a Workflow Execution to close successfully and create a new Workflow Execution in a single atomic operation if the number of Events in the Event History is becoming too large.

## [Enriching the User Interface](/develop/dotnet/enriching-ui)

Add descriptive information to Workflows and Events for better visibility and context in the UI.

- [Adding Summary and Details to Workflows](/develop/dotnet/enriching-ui#adding-summary-and-details-to-workflows)

---

## Workflow message passing - .NET SDK

A Workflow can act like a stateful web service that receives messages: Queries, Signals, and Updates. The Workflow implementation defines these endpoints via handler methods that can react to incoming messages and return values. Temporal Clients use messages to read Workflow state and control execution. See [Workflow message passing](/encyclopedia/workflow-message-passing) for a general overview of this topic. This page introduces these features for the Temporal .NET SDK.

## Write message handlers {#writing-message-handlers}

:::info

The code that follows is part of a [working solution](https://github.com/temporalio/samples-dotnet/tree/main/src/MessagePassing).

:::

Follow these guidelines when writing your message handlers:

- Message handlers are defined as methods on the Workflow class, using one of the three attributes: [`WorkflowQueryAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowQueryAttribute.html), [`WorkflowSignalAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowSignalAttribute.html), and [`WorkflowUpdateAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowUpdateAttribute.html).
- The parameters and return values of handlers and the main Workflow function must be [serializable](/dataconversion).
- Prefer data classes to multiple input parameters. Data class parameters allow you to add fields without changing the calling signature.

### Query handlers {#queries}

A [Query](/sending-messages#sending-queries) is a synchronous operation that retrieves state from a Workflow Execution. Define it as a method:

```csharp
[Workflow]
public class GreetingWorkflow
{
    public enum Language
    {
        Chinese,
        English,
        French,
        Spanish,
        Portuguese,
    }

    public record GetLanguagesInput(bool IncludeUnsupported);

    // ...

    [WorkflowQuery]
    public IList<Language> GetLanguages(GetLanguagesInput input) =>
        Enum.GetValues<Language>().
        Where(language => input.IncludeUnsupported || Greetings.ContainsKey(language)).
        ToList();

    // ...
```

Or as a property getter:

```csharp
[Workflow]
public class GreetingWorkflow
{
    public enum Language
    {
        Chinese,
        English,
        French,
        Spanish,
        Portuguese,
    }

    // ...

    [WorkflowQuery]
    public Language CurrentLanguage { get; private set; } = Language.English;

    // ...
```

- The Query attribute can accept arguments. See the API reference docs: [`WorkflowQueryAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowQueryAttribute.html).
- A Query handler must not modify Workflow state.
- You can't perform async blocking operations such as executing an Activity in a Query handler.
### Signal handlers {#signals}

A [Signal](/sending-messages#sending-signals) is an asynchronous message sent to a running Workflow Execution to change its state and control its flow:

```csharp
[Workflow]
public class GreetingWorkflow
{
    public record ApproveInput(string Name);

    // ...

    [WorkflowSignal]
    public async Task ApproveAsync(ApproveInput input)
    {
        approvedForRelease = true;
        approverName = input.Name;
    }

    // ...
```

- The Signal attribute can accept arguments. Refer to the API docs: [`WorkflowSignalAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowSignalAttribute.html).
- The handler should not return a value. The response is sent immediately from the server, without waiting for the Workflow to process the Signal.
- Signal (and Update) handlers can be asynchronous and blocking. This allows you to use Activities, Child Workflows, durable [`Workflow.DelayAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_DelayAsync_System_Int32_System_Nullable_System_Threading_CancellationToken__) Timers, [`Workflow.WaitConditionAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_WaitConditionAsync_System_Func_System_Boolean__System_Int32_System_Nullable_System_Threading_CancellationToken__) conditions, and more. See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for guidelines on safely using async Signal and Update handlers.

### Update handlers and validators {#updates}

An [Update](/sending-messages#sending-updates) is a trackable synchronous request sent to a running Workflow Execution. It can change the Workflow state, control its flow, and return a result. The sender must wait until the Worker accepts or rejects the Update. The sender may wait further to receive a returned value or an exception if something goes wrong:

```csharp
[Workflow]
public class GreetingWorkflow
{
    public enum Language
    {
        Chinese,
        English,
        French,
        Spanish,
        Portuguese,
    }

    // ...

    [WorkflowUpdateValidator(nameof(SetCurrentLanguageAsync))]
    public void ValidateLanguage(Language language)
    {
        if (!Greetings.ContainsKey(language))
        {
            throw new ApplicationFailureException($"{language} is not supported");
        }
    }

    [WorkflowUpdate]
    public async Task<Language> SetCurrentLanguageAsync(Language language)
    {
        var previousLanguage = CurrentLanguage;
        CurrentLanguage = language;
        return previousLanguage;
    }

    // ...
```

- The Update attribute can take arguments (such as `Name`, `Dynamic`, and `UnfinishedPolicy`) as described in the API reference docs for [`WorkflowUpdateAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowUpdateAttribute.html).
- About validators:
  - Use validators to reject an Update before it is written to History. Validators are always optional. If you don't need to reject Updates, you can skip them.
  - Define an Update validator with the [`WorkflowUpdateValidatorAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowUpdateValidatorAttribute.html) attribute. Pass the name of the Update handler (as with `nameof` above) when declaring the validator to connect it to its Update. The validator must be a `void` type and accept the same argument types as the handler.
- Accepting and rejecting Updates with validators:
  - To reject an Update, raise an exception of any type in the validator.
  - Without a validator, Updates are always accepted.
- Validators and Event History:
  - The `WorkflowExecutionUpdateAccepted` event is written into the History whether the acceptance was automatic or programmatic.
  - When a Validator raises an error, the Update is rejected, the Update is not run, and `WorkflowExecutionUpdateAccepted` _won't_ be added to the Event History. The caller receives an "Update failed" error.
- Use [`Workflow.CurrentUpdateInfo`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_CurrentUpdateInfo) to obtain information about the current Update. This includes the Update ID, which can be useful for deduplication when using Continue-As-New: see [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing).
- Update (and Signal) handlers can be asynchronous and blocking. This allows you to use Activities, Child Workflows, durable [`Workflow.DelayAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_DelayAsync_System_Int32_System_Nullable_System_Threading_CancellationToken__) Timers, [`Workflow.WaitConditionAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_WaitConditionAsync_System_Func_System_Boolean__System_Int32_System_Nullable_System_Threading_CancellationToken__) conditions, and more. See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for guidelines on safely using async Update and Signal handlers.

## Send messages {#send-messages}

To send Queries, Signals, or Updates you call methods on a [`WorkflowHandle`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html) object. To obtain a `WorkflowHandle`, you can:

- Use [`TemporalClient.StartWorkflowAsync`](https://dotnet.temporal.io/api/Temporalio.Client.TemporalClient.html#Temporalio_Client_TemporalClient_StartWorkflowAsync_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowOptions_) to start a Workflow and return its handle.
- Use the [`TemporalClient.GetWorkflowHandle`](https://dotnet.temporal.io/api/Temporalio.Client.TemporalClient.html#Temporalio_Client_TemporalClient_GetWorkflowHandle_System_String_System_String_System_String_) method to retrieve a Workflow handle by its Workflow Id (a sketch appears at the end of this section).

For example:

```csharp
var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var workflowHandle = await client.StartWorkflowAsync(
    (GreetingWorkflow wf) => wf.RunAsync(),
    new(id: "message-passing-workflow-id", taskQueue: "message-passing-sample"));
```

To check the argument types required when sending messages -- and the return type for Queries and Updates -- refer to the corresponding handler method in the Workflow Definition.

:::warning Using Continue-as-New and Updates

- Temporal _does not_ support Continue-as-New functionality within Update handlers.
- Complete all handlers _before_ using Continue-as-New.
- Use Continue-as-New from your main Workflow Definition method, just as you would complete or fail a Workflow Execution.

:::
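As a minimal sketch of the second approach, a Client can retrieve a typed handle to an existing Workflow Execution by its Workflow Id and then send messages through it. The Workflow Id below is assumed from the earlier example:

```csharp
// Get a typed handle to an existing Workflow Execution by its Workflow Id
var workflowHandle = client.GetWorkflowHandle<GreetingWorkflow>("message-passing-workflow-id");

// Property-getter Queries use the same lambda-based calling convention as method Queries
var currentLanguage = await workflowHandle.QueryAsync(wf => wf.CurrentLanguage);
```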
### Send a Query {#send-query}

Call a Query method with [`WorkflowHandle.QueryAsync`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_QueryAsync__1_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowQueryOptions_):

```csharp
var supportedLanguages = await workflowHandle.QueryAsync(wf => wf.GetLanguages(new(false)));
```

- Sending a Query doesn’t add events to a Workflow's Event History.
- You can send Queries to closed Workflow Executions within a Namespace's Workflow retention period. This includes Workflows that have completed, failed, or timed out. Querying terminated Workflows is not safe and, therefore, not supported.
- A Worker must be online and polling the Task Queue to process a Query.

### Send a Signal {#send-signal}

You can send a Signal to a Workflow Execution from a Temporal Client or from another Workflow Execution. However, you can only send Signals to Workflow Executions that haven’t closed.

#### Send a Signal from a Client {#send-signal-from-client}

Use [`WorkflowHandle.SignalAsync`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_SignalAsync_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowSignalOptions_) from Client code to send a Signal:

```csharp
await workflowHandle.SignalAsync(wf => wf.ApproveAsync(new("MyUser")));
```

- The call returns when the server accepts the Signal; it does _not_ wait for the Signal to be delivered to the Workflow Execution.
- The [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the Workflow's Event History.

#### Send a Signal from a Workflow {#send-signal-from-workflow}

A Workflow can send a Signal to another Workflow, known as an _External Signal_. In this case you need to obtain a Workflow handle for the external Workflow. Use `Workflow.GetExternalWorkflowHandle`, passing a running Workflow Id, to retrieve a typed Workflow handle:

```csharp
// ...
[Workflow]
public class WorkflowB
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        var handle = Workflow.GetExternalWorkflowHandle<WorkflowA>("workflow-a");
        await handle.SignalAsync(wf => wf.YourSignalAsync("signal argument"));
    }

    // ...
```

When an External Signal is sent:

- A [SignalExternalWorkflowExecutionInitiated](/references/events#signalexternalworkflowexecutioninitiated) Event appears in the sender's Event History.
- A [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the recipient's Event History.

#### Signal-With-Start {#signal-with-start}

Signal-With-Start allows a Client to send a Signal to a Workflow Execution, starting the Execution if it is not already running. If there's a Workflow running with the given Workflow Id, it will be signaled. If there isn't, a new Workflow will be started and immediately signaled.
To use Signal-With-Start, call `SignalWithStart` on your `WorkflowOptions` with a lambda expression invoking the Signal method:

```csharp
var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var options = new WorkflowOptions(id: "your-signal-with-start-workflow", taskQueue: "signal-tq");
options.SignalWithStart((GreetingWorkflow wf) => wf.SubmitGreetingAsync("User Signal with Start"));
await client.StartWorkflowAsync((GreetingWorkflow wf) => wf.RunAsync(), options);
```

### Send an Update {#send-update-from-client}

An Update is a synchronous, blocking call that can change Workflow state, control its flow, and return a result. A Client sending an Update must wait until the Server delivers the Update to a Worker. Workers must be available and responsive. If you need a response as soon as the Server receives the request, use a Signal instead. Also note that you can't send Updates to other Workflow Executions.

- `WorkflowExecutionUpdateAccepted` is added to the Event History when the Worker confirms that the Update passed validation.
- `WorkflowExecutionUpdateCompleted` is added to the Event History when the Worker confirms that the Update has finished.

To send an Update to a Workflow Execution, you can:

1. Call the Update method with `ExecuteUpdateAsync` from the Client and wait for the Update to complete. This code fetches an Update result:

   ```csharp
   var previousLanguage = await workflowHandle.ExecuteUpdateAsync(
       wf => wf.SetCurrentLanguageAsync(GreetingWorkflow.Language.Chinese));
   ```

2. Use `StartUpdateAsync` to receive a handle as soon as the Update is accepted. It returns an `UpdateHandle`:
   - Use this `UpdateHandle` later to fetch your results.
   - Asynchronous Update handlers normally perform long-running async Activities.
   - `StartUpdateAsync` only waits until the Worker has accepted or rejected the Update, not until all asynchronous operations are complete.

   For example:

   ```csharp
   // Wait until the update is accepted
   var updateHandle = await workflowHandle.StartUpdateAsync(
       wf => wf.SetGreetingAsync(new HelloWorldInput("World")),
       new(waitForStage: WorkflowUpdateStage.Accepted));
   // Wait until the update is completed
   var updateResult = await updateHandle.GetResultAsync();
   ```

For more details, see the "Async handlers" section.

#### Update-with-Start {#update-with-start}

:::tip

For open source server users, [Temporal Server version 1.28](https://github.com/temporalio/temporal/releases/tag/v1.28.0) is recommended.

:::

[Update-with-Start](/sending-messages#update-with-start) lets you [send an Update](/develop/dotnet/message-passing#send-update-from-client) that checks whether an already-running Workflow with that ID exists:

- If the Workflow exists, the Update is processed.
- If the Workflow does not exist, a new Workflow Execution is started with the given ID, and the Update is processed before the main Workflow method starts to execute.

Use `ExecuteUpdateWithStartWorkflowAsync` to start the Update and wait for the result in one go. Alternatively, use `StartUpdateWithStartWorkflowAsync` to start the Update and receive a `WorkflowUpdateHandle`, and then use `await updateHandle.GetResultAsync()` to retrieve the result from the Update. These calls return once the requested Update wait stage has been reached, or when the request times out.

- You will need to provide a `WithStartWorkflowOperation` to define the Workflow that will be started if necessary, and its arguments.
- You must specify an [IdConflictPolicy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) when creating the `WithStartWorkflowOperation`. Note that a `WithStartWorkflowOperation` can only be used once.

Here's an example taken from the [UpdateWithStartLazyInit](https://github.com/temporalio/samples-dotnet/blob/main/src/UpdateWithStartLazyInit/Program.cs) sample:

```csharp
async Task<(WorkflowHandle<ShoppingCartWorkflow> Handle, decimal? Subtotal)> AddCartItemAsync(
    string sessionId, ShoppingCartItem item)
{
    // Issue an update-with-start that will create the workflow if it does not
    // exist before attempting the update

    // Create the start operation
    var startOperation = WithStartWorkflowOperation.Create(
        (ShoppingCartWorkflow wf) => wf.RunAsync(),
        new(id: $"cart-{sessionId}", taskQueue: TaskQueue)
        {
            IdConflictPolicy = Temporalio.Api.Enums.V1.WorkflowIdConflictPolicy.UseExisting,
        });

    // Issue the update-with-start, swallowing item-unavailable failure
    decimal? subtotal;
    try
    {
        subtotal = await client.ExecuteUpdateWithStartWorkflowAsync(
            (ShoppingCartWorkflow wf) => wf.AddItemAsync(item),
            new(startOperation));
    }
    catch (WorkflowUpdateFailedException e) when (
        e.InnerException is ApplicationFailureException appErr &&
        appErr.ErrorType == "ItemUnavailable")
    {
        // Set subtotal to null if item was not found
        subtotal = null;
    }

    return (await startOperation.GetHandleAsync(), subtotal);
}
```

:::info NON-TYPE SAFE API CALLS

In real-world development, sometimes you may be unable to import Workflow Definition method signatures. When you don't have access to the Workflow Definition or it isn't written in .NET, you can still use non-type safe APIs and dynamic method invocation. Pass method names instead of method objects to:

- [`TemporalClient.StartWorkflowAsync`](https://dotnet.temporal.io/api/Temporalio.Client.TemporalClient.html#Temporalio_Client_TemporalClient_StartWorkflowAsync_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowOptions_)
- [`WorkflowHandle.QueryAsync`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_QueryAsync__1_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowQueryOptions_)
- [`WorkflowHandle.SignalAsync`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_SignalAsync_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowSignalOptions_)
- [`WorkflowHandle.ExecuteUpdateAsync`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_ExecuteUpdateAsync_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowUpdateOptions_)
- [`WorkflowHandle.StartUpdateAsync`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_StartUpdateAsync_System_String_System_Collections_Generic_IReadOnlyCollection_System_Object__Temporalio_Client_WorkflowUpdateStartOptions_)

Use non-type safe overloads of these APIs:

- [`TemporalClient.GetWorkflowHandle`](https://dotnet.temporal.io/api/Temporalio.Client.TemporalClient.html#Temporalio_Client_TemporalClient_GetWorkflowHandle_System_String_System_String_System_String_)
- [`Workflow.GetExternalWorkflowHandle`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_GetExternalWorkflowHandle_System_String_System_String_)

A minimal sketch of this string-based style follows this note.

:::
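Here is a minimal sketch of the string-based, non-type safe style. The Workflow, Signal, and Query names and argument shapes below are placeholders; match them to whatever the target Workflow actually registers:

```csharp
// Start a Workflow by type name rather than by method reference
var handle = await client.StartWorkflowAsync(
    "GreetingWorkflow",                        // Workflow type name (placeholder)
    new object?[] { "some-arg" },              // positional Workflow arguments
    new(id: "my-workflow-id", taskQueue: "my-task-queue"));

// Send a Signal by name
await handle.SignalAsync("submit_greeting", new object?[] { "User 1" });

// Send a Query by name, specifying the expected result type
var status = await handle.QueryAsync<string>("get_status", Array.Empty<object?>());
```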
## Message handler patterns {#message-handler-patterns}

This section covers common write operations, such as Signal and Update handlers. It doesn't apply to pure read operations, like Queries or Update Validators.

:::tip

For additional information, see [Inject work into the main Workflow](/handling-messages#injecting-work-into-main-workflow) and [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing).

:::

### Add async handlers to use `await` {#async-handlers}

Signal and Update handlers can be asynchronous as well as blocking. Using asynchronous calls allows you to `await` Activities, Child Workflows, [`Workflow.DelayAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_DelayAsync_System_Int32_System_Nullable_System_Threading_CancellationToken__) Timers, [`Workflow.WaitConditionAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_WaitConditionAsync_System_Func_System_Boolean__System_Int32_System_Nullable_System_Threading_CancellationToken__) wait conditions, etc. This expands the possibilities for what can be done by a handler, but it also means that handler executions and your main Workflow method are all running concurrently, with switching occurring between them at await calls. It's essential to understand the things that could go wrong in order to use asynchronous handlers safely. See [Workflow message passing](/encyclopedia/workflow-message-passing) for guidance on safe usage of async Signal and Update handlers, and the [Controlling handler concurrency](#control-handler-concurrency) and [Waiting for message handlers to finish](#wait-for-message-handlers) sections below.

The following code executes an Activity that simulates a network call to a remote service:

```csharp
public class MyActivities
{
    private static readonly Dictionary<Language, string> Greetings = new()
    {
        [Language.Arabic] = "مرحبا بالعالم",
        [Language.Chinese] = "你好,世界",
        [Language.English] = "Hello, world",
        [Language.French] = "Bonjour, monde",
        [Language.Hindi] = "नमस्ते दुनिया",
        [Language.Spanish] = "Hola mundo",
    };

    [Activity]
    public async Task<string?> CallGreetingServiceAsync(Language language)
    {
        // Pretend that we are calling a remote service
        await Task.Delay(200);
        return Greetings.TryGetValue(language, out var value) ? value : null;
    }
}
```

The following code defines a `WorkflowUpdate` handler that makes asynchronous use of the preceding Activity:

```csharp
[Workflow]
public class GreetingWorkflow
{
    private readonly Mutex mutex = new();

    // ...

    [WorkflowUpdate]
    public async Task<Language> SetLanguageAsync(Language language)
    {
        // 👉 Use a mutex here to ensure that multiple calls to SetLanguageAsync are processed in order.
        await mutex.WaitOneAsync();
        try
        {
            if (!greetings.ContainsKey(language))
            {
                var greeting = await Workflow.ExecuteActivityAsync(
                    (MyActivities acts) => acts.CallGreetingServiceAsync(language),
                    new() { StartToCloseTimeout = TimeSpan.FromSeconds(10) });
                if (greeting == null)
                {
                    // 👉 An update validator cannot be async, so cannot be used to check that the remote
                    // CallGreetingServiceAsync supports the requested language. Throwing ApplicationFailureException
                    // will fail the Update, but the WorkflowExecutionUpdateAccepted event will still be
                    // added to history.
                    throw new ApplicationFailureException(
                        $"Greeting service does not support {language}");
                }

                greetings[language] = greeting;
            }

            var previousLanguage = CurrentLanguage;
            CurrentLanguage = language;
            return previousLanguage;
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}
```

After updating the code for asynchronous calls, your Update handler can schedule an Activity and await the result. Although an async Signal handler can initiate similar network tasks, using an Update handler allows the Client to receive a result or error once the Activity completes. This lets your Client track the progress of asynchronous work performed by the Update's Activities, Child Workflows, etc.

### Add wait conditions to block {#block-with-wait}

Sometimes, async Signal or Update handlers need to meet certain conditions before they should continue. Using a wait condition with [`Workflow.WaitConditionAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_WaitConditionAsync_System_Func_System_Boolean__System_Int32_System_Nullable_System_Threading_CancellationToken__) sets a function that prevents the code from proceeding until the condition returns `true`. This is an important feature that helps you control your handler logic.

Here are two important use cases for `Workflow.WaitConditionAsync`:

- Waiting in a handler until it is appropriate to continue.
- Waiting in the main Workflow until all active handlers have finished.

The condition can reflect state that is updated by any part of the Workflow code: the main Workflow method, other handlers, or tasks started by the main Workflow method, and so forth.

#### Use wait conditions in handlers {#wait-in-handlers}

Consider a `ReadyForUpdateToExecute` method that runs before your Update handler executes. The [`Workflow.WaitConditionAsync`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html?#Temporalio_Workflows_Workflow_WaitConditionAsync_System_Func_System_Boolean__System_Int32_System_Nullable_System_Threading_CancellationToken__) call waits until your condition is met:

```csharp
[WorkflowUpdate]
public async Task MyUpdateAsync(UpdateInput updateInput)
{
    await Workflow.WaitConditionAsync(() => ReadyForUpdateToExecute(updateInput));
    // ...
}
```

Remember: Handlers can execute before the main Workflow method starts.

#### Ensure your handlers finish before the Workflow completes {#wait-for-message-handlers}

Workflow wait conditions can ensure your handler completes before a Workflow finishes. When your Workflow uses async Signal or Update handlers, your main Workflow method can return or continue-as-new while a handler is still waiting on an async task, such as an Activity result. The Workflow completing may interrupt the handler before it finishes crucial work and cause Client errors when trying to retrieve Update results.
Use `Workflow.AllHandlersFinished` to address this problem and allow your Workflow to end smoothly:

```csharp
[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync()
    {
        // ...
        await Workflow.WaitConditionAsync(() => Workflow.AllHandlersFinished);
        return "workflow-result";
    }

    // ...
```

By default, your Worker will log a warning when you allow a Workflow Execution to finish with unfinished handler executions. You can silence these warnings on a per-handler basis by passing the `UnfinishedPolicy` argument to the [`WorkflowSignalAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowSignalAttribute.html) / [`WorkflowUpdateAttribute`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowUpdateAttribute.html) attribute:

```csharp
[WorkflowUpdate(UnfinishedPolicy = HandlerUnfinishedPolicy.Abandon)]
public async Task MyUpdateAsync()
{
    // ...
```

See [Finishing handlers before the Workflow completes](/handling-messages#finishing-message-handlers) for more information.

### Use `[WorkflowInit]` to operate on Workflow input before any handler executes

The `[WorkflowInit]` attribute gives message handlers access to [Workflow input](/handling-messages#workflow-initializers). When you use the `[WorkflowInit]` attribute on your constructor, you give the constructor the same Workflow parameters as your `[WorkflowRun]` method. The SDK will then ensure that your constructor receives the Workflow input arguments that the [Client sent](/develop/dotnet/temporal-client#start-workflow). The Workflow input arguments are also passed to your `[WorkflowRun]` method -- that always happens, whether or not you use the `[WorkflowInit]` attribute.

Here's an example. The constructor and `RunAsync` must have the same parameters with the same types:

```csharp
[Workflow]
public class WorkflowInitWorkflow
{
    public record Input(string Name);

    private readonly string nameWithTitle;
    private bool titleHasBeenChecked;

    [WorkflowInit]
    public WorkflowInitWorkflow(Input input) => nameWithTitle = $"Sir {input.Name}";

    [WorkflowRun]
    public async Task<string> RunAsync(Input ignored)
    {
        await Workflow.WaitConditionAsync(() => titleHasBeenChecked);
        return $"Hello, {nameWithTitle}";
    }

    [WorkflowUpdate]
    public async Task<bool> CheckTitleValidityAsync()
    {
        // The handler is now guaranteed to see the workflow input after it has
        // been processed by the constructor.
        var valid = await Workflow.ExecuteActivityAsync(
            (MyActivities acts) => acts.CheckTitleValidityAsync(nameWithTitle),
            new() { StartToCloseTimeout = TimeSpan.FromSeconds(10) });
        titleHasBeenChecked = true;
        return valid;
    }
}
```

### Use locks to prevent concurrent handler execution {#control-handler-concurrency}

Concurrent processes can interact in unpredictable ways. Incorrectly written [concurrent message-passing](/handling-messages#message-handler-concurrency) code may not work correctly when multiple handler instances run simultaneously. Here's an example of a pathological case:

```csharp
[Workflow]
public class MyWorkflow
{
    // ...

    [WorkflowSignal]
    public async Task BadHandlerAsync()
    {
        var data = await Workflow.ExecuteActivityAsync(
            (MyActivities acts) => acts.FetchDataAsync(),
            new() { StartToCloseTimeout = TimeSpan.FromSeconds(10) });
        this.x = data.X;
        // 🐛🐛 Bug!! If multiple instances of this handler are executing concurrently, then
        // there may be times when the Workflow has this.x from one Activity execution and this.y from another.
        await Workflow.DelayAsync(1000);
        this.y = data.Y;
    }
}
```

Coordinating access with [`Workflows.Mutex`](https://dotnet.temporal.io/api/Temporalio.Workflows.Mutex.html), a mutual exclusion lock, corrects this code. Locking makes sure that only one handler instance can execute a specific section of code at any given time:

```csharp
[Workflow]
public class MyWorkflow
{
    private readonly Mutex mutex = new();

    // ...

    [WorkflowSignal]
    public async Task SafeHandlerAsync()
    {
        await mutex.WaitOneAsync();
        try
        {
            var data = await Workflow.ExecuteActivityAsync(
                (MyActivities acts) => acts.FetchDataAsync(),
                new() { StartToCloseTimeout = TimeSpan.FromSeconds(10) });
            this.x = data.X;
            // ✅ OK: the scheduler may switch now to a different handler execution, or to the main workflow
            // method, but no other execution of this handler can run until this execution finishes.
            await Workflow.DelayAsync(1000);
            this.y = data.Y;
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}
```

For additional concurrency options, you can use [`Workflows.Semaphore`](https://dotnet.temporal.io/api/Temporalio.Workflows.Semaphore.html). Semaphores manage access to shared resources and coordinate the order in which threads or processes execute.
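As a minimal sketch of the Semaphore approach, the following limits a Signal handler to a fixed number of concurrent executions. The handler, the `ProcessAsync` Activity, and the limit of 5 are illustrative assumptions:

```csharp
[Workflow]
public class MyWorkflow
{
    // Allow at most five concurrent executions of this handler (example limit)
    private readonly Semaphore semaphore = new(5);

    [WorkflowSignal]
    public async Task ProcessItemAsync(string item)
    {
        await semaphore.WaitAsync();
        try
        {
            // ProcessAsync is a placeholder Activity
            await Workflow.ExecuteActivityAsync(
                (MyActivities acts) => acts.ProcessAsync(item),
                new() { StartToCloseTimeout = TimeSpan.FromSeconds(10) });
        }
        finally
        {
            semaphore.Release();
        }
    }
}
```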
## Message handler troubleshooting {#message-handler-troubleshooting}

When sending a Signal, Update, or Query to a Workflow, your Client might encounter the following errors:

- **The Client can't contact the server**: You'll receive a [`Temporalio.Exceptions.RpcException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.html) exception whose `Code` property is the [`RpcException.StatusCode`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.StatusCode.html) value `Unavailable` (after some retries).
- **The Workflow does not exist**: You'll receive a [`Temporalio.Exceptions.RpcException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.html) exception whose `Code` property is the [`RpcException.StatusCode`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.StatusCode.html) value `NotFound`.

See [Exceptions in message handlers](/handling-messages#exceptions) for a non–.NET-specific discussion of this topic.

### Problems when sending a Signal {#signal-problems}

When sending a Signal, the only exception that can result from the request itself is `RpcException`. All handlers may experience additional exceptions during the initial (pre-Worker) part of the request lifecycle. For Queries and Updates, however, the Client waits for a response from the Worker, so if an issue occurs while the Worker executes the handler, the Client may receive an exception.

### Problems when sending an Update {#update-problems}

When working with Updates, you may encounter these errors:

- **No Workflow Workers are polling the Task Queue**: Your request will be retried by the SDK Client indefinitely. Use a `CancellationToken` in your [RPC options](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowUpdateOptions.html#Temporalio_Client_WorkflowUpdateOptions_Rpc) to cancel the Update. This raises a [Temporalio.Exceptions.WorkflowUpdateRpcTimeoutOrCanceledException](https://dotnet.temporal.io/api/Temporalio.Exceptions.WorkflowUpdateRpcTimeoutOrCanceledException.html) exception.
- **Update failed**: You'll receive a [`Temporalio.Exceptions.WorkflowUpdateFailedException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.WorkflowUpdateFailedException.html) exception. There are two ways this can happen:
  - The Update was rejected by an Update validator defined in the Workflow alongside the Update handler.
  - The Update failed after having been accepted.

  Update failures are like [Workflow failures](/references/failures). Issues that cause a Workflow failure in the main method also cause Update failures in the Update handler. These might include:
  - A failed Child Workflow
  - A failed Activity (if the Activity retries have been set to a finite number)
  - The Workflow author raising `ApplicationFailure`
  - Any error listed in [`TemporalWorkerOptions.WorkflowFailureExceptionTypes`](https://dotnet.temporal.io/api/Temporalio.Worker.TemporalWorkerOptions.html#Temporalio_Worker_TemporalWorkerOptions_WorkflowFailureExceptionTypes) on the Worker or [`WorkflowAttribute.FailureExceptionTypes`](https://dotnet.temporal.io/api/Temporalio.Workflows.WorkflowAttribute.html#Temporalio_Workflows_WorkflowAttribute_FailureExceptionTypes) on the Workflow (empty by default)

- **The handler caused the Workflow Task to fail**: A [Workflow Task Failure](/references/failures) causes the server to retry Workflow Tasks indefinitely. What happens to your Update request depends on its stage:
  - If the request hasn't been accepted by the server, you receive a `FAILED_PRECONDITION` [`Temporalio.Exceptions.RpcException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.html) exception.
  - If the request has been accepted, it is durable. Once the Workflow is healthy again after a code deploy, use an [`UpdateHandle`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowUpdateHandle.html) to fetch the Update result.

- **The Workflow finished while the Update handler execution was in progress**: You'll receive a [`Temporalio.Exceptions.RpcException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.html) "workflow execution already completed", for example because:
  - The Workflow was canceled or failed.
  - The Workflow completed normally or continued-as-new and the Workflow author did not [wait for handlers to be finished](/handling-messages#finishing-message-handlers).

### Problems when sending a Query {#query-problems}

When working with Queries, you may encounter these errors:

- **There is no Workflow Worker polling the Task Queue**: You'll receive a [`Temporalio.Exceptions.RpcException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.html) on which the `Code` is the [`RpcException.StatusCode`](https://dotnet.temporal.io/api/Temporalio.Exceptions.RpcException.StatusCode.html) value `FailedPrecondition`.
- **Query failed**: You'll receive a [`Temporalio.Exceptions.WorkflowQueryFailedException`](https://dotnet.temporal.io/api/Temporalio.Exceptions.WorkflowQueryFailedException.html) exception if something goes wrong during a Query. Any exception in a Query handler will trigger this error. This differs from Signal and Update requests, where exceptions can lead to Workflow Task Failure instead.
- **The handler caused the Workflow Task to fail.** This would happen, for example, if the Query handler blocks the thread for too long without yielding.
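As a minimal sketch, a Client can distinguish these cases by catching the exception types above. This uses the `workflowHandle` and Query from the earlier examples:

```csharp
using Temporalio.Exceptions;

try
{
    var languages = await workflowHandle.QueryAsync(wf => wf.GetLanguages(new(false)));
}
catch (WorkflowQueryFailedException e)
{
    // The Query handler itself threw an exception
    Console.WriteLine($"Query failed: {e.Message}");
}
catch (RpcException e) when (e.Code == RpcException.StatusCode.NotFound)
{
    // No Workflow Execution with the given Workflow Id exists
    Console.WriteLine($"Workflow not found: {e.Message}");
}
```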
## Dynamic Handler {#dynamic-handler}

Temporal supports Dynamic Queries, Signals, Updates, Workflows, and Activities. These are unnamed handlers that are invoked if no other statically defined handler with the given name exists.

Dynamic Handlers provide flexibility to handle cases where the names of Queries, Signals, Updates, Workflows, or Activities aren't known at run time.

:::caution

Dynamic Handlers should be used judiciously as a fallback mechanism rather than the primary approach. Overusing them can lead to maintainability and debugging issues down the line.

Instead, Signals, Queries, Workflows, or Activities should be defined statically whenever possible, with clear names that indicate their purpose. Use static definitions as the primary way of structuring your Workflows.

Reserve Dynamic Handlers for cases where the handler names are not known at compile time and need to be looked up dynamically at runtime. They are meant to handle edge cases and act as a catch-all, not as the main way of invoking logic.

:::

### Set a Dynamic Query {#set-a-dynamic-query}

A Dynamic Query in Temporal is a Query method that is invoked dynamically at runtime if no other Query with the same name is registered. A Query can be made dynamic by setting `Dynamic` to `true` on the `[WorkflowQuery]` attribute. Only one Dynamic Query can be present on a Workflow.

The Query Handler parameters must accept a `string` name and `Temporalio.Converters.IRawValue[]` for the arguments. The [Workflow.PayloadConverter](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_PayloadConverter) property is used to convert an `IRawValue` object to the desired type using extension methods in the `Temporalio.Converters` namespace (in the example below, `MyQueryInput` is a placeholder for your own argument type).

```csharp
[WorkflowQuery(Dynamic = true)]
public string DynamicQuery(string queryName, IRawValue[] args)
{
    // MyQueryInput stands in for the argument type you expect at runtime
    var input = Workflow.PayloadConverter.ToValue<MyQueryInput>(args.Single());
    return statuses[input.Type];
}
```

### Set a Dynamic Signal {#set-a-dynamic-signal}

A Dynamic Signal in Temporal is a Signal that is invoked dynamically at runtime if no other Signal with the same name is registered. A Signal can be made dynamic by setting `Dynamic` to `true` on the `[WorkflowSignal]` attribute. Only one Dynamic Signal can be present on a Workflow.

The Signal Handler parameters must accept a `string` name and `Temporalio.Converters.IRawValue[]` for the arguments. The [Workflow.PayloadConverter](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_PayloadConverter) property is used to convert an `IRawValue` object to the desired type using extension methods in the `Temporalio.Converters` namespace (in the example below, `MySignalInput` is a placeholder for your own argument type).

```csharp
[WorkflowSignal(Dynamic = true)]
public async Task DynamicSignalAsync(string signalName, IRawValue[] args)
{
    // MySignalInput stands in for the argument type you expect at runtime
    var input = Workflow.PayloadConverter.ToValue<MySignalInput>(args.Single());
    pendingThings.Add(input);
}
```

### Set a Dynamic Update {#set-a-dynamic-update}

A Dynamic Update in Temporal is an Update that is invoked dynamically at runtime if no other Update with the same name is registered. An Update can be made dynamic by setting `Dynamic` to `true` on the `[WorkflowUpdate]` attribute. Only one Dynamic Update can be present on a Workflow.

The Update Handler parameters must accept a `string` name and `Temporalio.Converters.IRawValue[]` for the arguments. The [Workflow.PayloadConverter](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_PayloadConverter) property is used to convert an `IRawValue` object to the desired type using extension methods in the `Temporalio.Converters` namespace (in the example below, `MyUpdateInput` is a placeholder for your own argument type).
```csharp
[WorkflowUpdate(Dynamic = true)]
public async Task<string> DynamicUpdateAsync(string updateName, IRawValue[] args)
{
    // MyUpdateInput stands in for the argument type you expect at runtime
    var input = Workflow.PayloadConverter.ToValue<MyUpdateInput>(args.Single());
    pendingThings.Add(input);
    return statuses[input.Type];
}
```

---

## Observability - .NET SDK

This page covers features related to viewing the state of the application, including:

- [Metrics](#metrics)
- [Tracing](#tracing)
- [Logging](#logging)
- [Visibility](#visibility)

The observability feature guide covers the many ways to view the current state of your [Temporal Application](/temporal#temporal-application). This includes the ways to view which [Workflow Executions](/workflow-execution) are tracked by the [Temporal Platform](/temporal#temporal-platform) and the state of any specified Workflow Execution, either currently or at points of an execution.

## Emit metrics {#metrics}

**How to emit metrics using the Temporal .NET SDK**

Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the [SDK metrics reference](/references/sdk-metrics).

- For an overview of Prometheus and Grafana integration, refer to the [Monitoring](/self-hosted-guide/monitoring) guide.
- For an end-to-end example that exposes metrics with the .NET SDK, refer to the [samples-dotnet](https://github.com/temporalio/samples-dotnet/tree/main/src/OpenTelemetry) repo.

Metrics in .NET are configured on the `Metrics` property of the `Telemetry` property on the `TemporalRuntime`. That object should be created globally and should be used for all clients; therefore, you should configure this before any other Temporal code.

### Set a Prometheus endpoint

**How to set a Prometheus endpoint using the .NET SDK**

The following example exposes a Prometheus endpoint on port `9000`.

```csharp
using Temporalio.Client;
using Temporalio.Runtime;

var runtime = new TemporalRuntime(new()
{
    Telemetry = new() { Metrics = new() { Prometheus = new("0.0.0.0:9000") } },
});
var client = await TemporalClient.ConnectAsync(
    new("localhost:7233") { Runtime = runtime });
```

### Set a custom metric meter

**How to reuse the .NET metric meter using the Temporal .NET SDK**

A custom metric meter can be set on the telemetry options to handle metrics programmatically. The [Temporalio.Extensions.DiagnosticSource](https://github.com/temporalio/sdk-dotnet/tree/main/src/Temporalio.Extensions.DiagnosticSource) extension provides a custom metric meter implementation that sends all metrics to a [System.Diagnostics.Metrics.Meter](https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.metrics.meter) instance.

```csharp
using System.Diagnostics.Metrics;
using Temporalio.Client;
using Temporalio.Extensions.DiagnosticSource;
using Temporalio.Runtime;

// Create .NET meter
using var meter = new Meter("My.Meter");
// Can create MeterListener or OTel meter provider here...

// Create Temporal runtime with a custom metric meter for that meter
var runtime = new TemporalRuntime(new()
{
    Telemetry = new()
    {
        Metrics = new() { CustomMetricMeter = new CustomMetricMeter(meter) },
    },
});
var client = await TemporalClient.ConnectAsync(
    new("localhost:7233") { Runtime = runtime });
```

## Set up Tracing {#tracing}

**How to configure tracing using the Temporal .NET SDK**

Tracing allows you to view the call graph of a Workflow along with its Activities, Nexus Operations, and any Child Workflows.

To configure OpenTelemetry tracing in .NET, use the [Temporalio.Extensions.OpenTelemetry](https://github.com/temporalio/sdk-dotnet/tree/main/src/Temporalio.Extensions.OpenTelemetry) extension. The [`Temporalio.Extensions.OpenTelemetry.TracingInterceptor`](https://dotnet.temporal.io/api/Temporalio.Extensions.OpenTelemetry.TracingInterceptor.html) class can be set as an interceptor in the client options.
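For example, a minimal sketch of wiring up the interceptor; it assumes you configure an OpenTelemetry tracer provider and exporter elsewhere in your application:

```csharp
using Temporalio.Client;
using Temporalio.Extensions.OpenTelemetry;

// Register the tracing interceptor when connecting the client; spans then flow
// to whatever OpenTelemetry tracer provider your application has configured
var client = await TemporalClient.ConnectAsync(new("localhost:7233")
{
    Interceptors = new[] { new TracingInterceptor() },
});
```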
When your Client is connected, spans are created for all Client calls, Activities, and Workflow invocations on the Worker. Spans are created and serialized through the server to give one trace for a Workflow Execution.

## Log from a Workflow {#logging}

**How to log from a Workflow using the Temporal .NET SDK**

Logging enables you to record critical information during code execution. Loggers create an audit trail and capture information about your Workflow's operation. An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.

The logger supports the following logging levels:

| Level   | Use                                                                                                       |
| ------- | --------------------------------------------------------------------------------------------------------- |
| `TRACE` | The most detailed level of logging, used for very fine-grained information.                                |
| `DEBUG` | Detailed information, typically useful for debugging purposes.                                             |
| `INFO`  | General information about the application's operation.                                                     |
| `WARN`  | Indicates potentially harmful situations or minor issues that don't prevent the application from working.  |
| `ERROR` | Indicates error conditions that might still allow the application to continue running.                     |

The Temporal SDK core normally uses `WARN` as its default logging level.

Logging uses the .NET standard logging APIs. The `LoggerFactory` can be set on the client options. The following example logs to the console and sets the level to `Information`.

```csharp
using Microsoft.Extensions.Logging;
using Temporalio.Client;

var client = await TemporalClient.ConnectAsync(new("localhost:7233")
{
    LoggerFactory = LoggerFactory.Create(builder =>
        builder.
            AddSimpleConsole(options => options.TimestampFormat = "[HH:mm:ss] ").
            SetMinimumLevel(LogLevel.Information)),
});
```

You can log from a Workflow using `Workflow.Logger`, which is an instance of .NET's `ILogger`.

```csharp
Workflow.Logger.LogInformation("Given name: {Name}", name);
```
## Use Visibility APIs {#visibility}

**How to use Visibility APIs using the Temporal .NET SDK**

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

### Use Search Attributes {#search-attributes}

**How to use Search Attributes using the Temporal .NET SDK**

The typical method of retrieving a Workflow Execution is by its Workflow Id. However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments. You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime` and `ExecutionStatus` are automatically added to Workflow Executions.
- _Custom Search Attributes_ can contain their own domain-specific data (like `customerId` or `numItems`).
- A few [generic Custom Search Attributes](/search-attribute#custom-search-attribute) like `CustomKeywordField` and `CustomIntField` are created by default in Temporal's [Docker Compose](https://github.com/temporalio/docker-compose).

The steps to using custom Search Attributes are:

- Create a new Search Attribute in your Temporal Service in the CLI or Web UI.
  - For example: `temporal operator search-attribute create --name CustomKeywordField --type Text`
    - Replace `CustomKeywordField` with the name of your Search Attribute.
    - Replace `Text` with a type value associated with your Search Attribute: `Text` | `Keyword` | `Int` | `Double` | `Bool` | `Datetime` | `KeywordList`
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client by including it as an option when starting the Execution.
  - In the Workflow by calling `UpsertTypedSearchAttributes`.
- Read the value of the Search Attribute (see the sketch at the end of this section):
  - On the Client by calling `Describe` on a `WorkflowHandle`.
  - In the Workflow by looking at `WorkflowInfo`.
- Query Workflow Executions by the Search Attribute using a [List Filter](/list-filter):
  - [In the Temporal CLI](/cli/operator#list-2)
  - In code by calling `ListWorkflowsAsync`.

### List Workflow Executions {#list-workflow-executions}

**How to list Workflow Executions using the .NET SDK**

Use the [ListWorkflowsAsync()](https://dotnet.temporal.io/api/Temporalio.Client.ITemporalClient.html#Temporalio_Client_ITemporalClient_ListWorkflowsAsync_System_String_Temporalio_Client_WorkflowListOptions_) method on the Client and pass a [List Filter](/list-filter) as an argument to filter the listed Workflows. The result is an async enumerable.

```csharp
await foreach (var wf in client.ListWorkflowsAsync("WorkflowType='GreetingWorkflow'"))
{
    Console.WriteLine("Workflow: {0}", wf.Id);
}
```

### Set Custom Search Attributes {#custom-search-attributes}

**How to use custom Search Attributes using the Temporal .NET SDK**

After you've created custom Search Attributes in your Temporal Service (using `temporal operator search-attribute create` or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

To set custom Search Attributes, use the `TypedSearchAttributes` property on `WorkflowOptions` for `StartWorkflowAsync` or `ExecuteWorkflowAsync`. Typed Search Attributes are a `SearchAttributeCollection` created with a builder.

```csharp
// This only needs to be created once, so it is common to make it a static readonly even though we
// create inline here for demonstration
var myKeywordAttributeKey = SearchAttributeKey.CreateKeyword("MyKeywordAttribute");

// Start workflow with the search attribute collection
var handle = await client.StartWorkflowAsync(
    (MyWorkflow wf) => wf.RunAsync(),
    new(id: "my-workflow-id", taskQueue: "my-task-queue")
    {
        TypedSearchAttributes = new SearchAttributeCollection.Builder().
            Set(myKeywordAttributeKey, "SomeKeywordValue").
            ToSearchAttributeCollection(),
    });
```

### Upsert Search Attributes {#upsert-search-attributes}

**How to upsert custom Search Attributes using the Temporal .NET SDK**

You can upsert Search Attributes to add, update, or remove Search Attributes from within Workflow code. To upsert custom Search Attributes, use the [`UpsertTypedSearchAttributes()`](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_UpsertTypedSearchAttributes_Temporalio_Workflows_SearchAttributeUpdate___) method with a set of updates. Keys can be predefined for reuse.

```csharp
// These only need to be created once, so it is common to make them static readonly even though we
// create inline here for demonstration
var myKeywordAttributeKey = SearchAttributeKey.CreateKeyword("MyKeywordAttribute");
var myTextAttributeKey = SearchAttributeKey.CreateText("MyTextAttribute");

// Add/Update the keyword one and remove the text one
Workflow.UpsertTypedSearchAttributes(
    myKeywordAttributeKey.ValueSet("SomeKeywordValue"),
    myTextAttributeKey.ValueUnset());
```
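As referenced in the steps above, here is a hedged sketch of reading a Search Attribute value back. It reuses the `myKeywordAttributeKey` key and a Workflow handle from the surrounding examples, and assumes the typed collection exposes a `Get` accessor:

```csharp
// On the Client: describe the Workflow Execution and read its typed Search Attributes
var description = await handle.DescribeAsync();
var keywordValue = description.TypedSearchAttributes.Get(myKeywordAttributeKey);
Console.WriteLine("Keyword attribute: {0}", keywordValue);

// Inside Workflow code: read the Workflow's own typed Search Attributes
var current = Workflow.TypedSearchAttributes.Get(myKeywordAttributeKey);
```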
---

## Schedules - .NET SDK

This page shows how to do the following:

- [Schedule a Workflow](#schedule-a-workflow)
- [Create a Scheduled Workflow](#create-a-workflow)
- [Backfill a Scheduled Workflow](#backfill-a-scheduled-workflow)
- [Delete a Scheduled Workflow](#delete-a-scheduled-workflow)
- [Describe a Scheduled Workflow](#describe-a-scheduled-workflow)
- [List a Scheduled Workflow](#list-a-scheduled-workflow)
- [Pause a Scheduled Workflow](#pause-a-scheduled-workflow)
- [Trigger a Scheduled Workflow](#trigger-a-scheduled-workflow)
- [Update a Scheduled Workflow](#update-a-scheduled-workflow)
- [Use Start Delay](#start-delay)

## Schedule a Workflow {#schedule-a-workflow}

**How to Schedule a Workflow using the Temporal .NET SDK**

Scheduling Workflows is a crucial aspect of any automation process, especially when dealing with time-sensitive tasks. By scheduling a Workflow, you can automate repetitive tasks, reduce the need for manual intervention, and ensure timely execution of your business processes. Use any of the following actions to schedule a Workflow Execution and take control of your automation process.

### Create a Scheduled Workflow {#create-a-workflow}

**How to create a Scheduled Workflow using the Temporal .NET SDK**

The create action enables you to create a new Schedule. When you create a new Schedule, a unique Schedule ID is generated, which you can use to reference the Schedule in other Schedule commands.

To create a Scheduled Workflow Execution in .NET, use the [CreateScheduleAsync](https://dotnet.temporal.io/api/Temporalio.Client.ITemporalClient.html#Temporalio_Client_ITemporalClient_CreateScheduleAsync_System_String_Temporalio_Client_Schedules_Schedule_Temporalio_Client_Schedules_ScheduleOptions_) method on the Client. Then pass the Schedule ID and the Schedule object to the method to create a Scheduled Workflow Execution. Set the Schedule's `Action` property to an instance of `ScheduleActionStartWorkflow` to schedule a Workflow Execution.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var handle = await client.CreateScheduleAsync(
    "my-schedule-id",
    new(
        Action: ScheduleActionStartWorkflow.Create(
            (MyWorkflow wf) => wf.RunAsync(),
            new(id: "my-workflow-id", taskQueue: "my-task-queue")),
        Spec: new()
        {
            Intervals = new List<ScheduleIntervalSpec> { new(Every: TimeSpan.FromDays(5)) },
        }));
```

:::tip Schedule Auto-Deletion

Once a Schedule has completed creating all its Workflow Executions, the Temporal Service deletes it since it won't fire again. The Temporal Service doesn't guarantee when this removal will happen.

:::
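Beyond fixed intervals, a Schedule Spec can also be calendar- or cron-based. The following is a hedged sketch (it assumes `ScheduleSpec` exposes a `CronExpressions` collection, as in other Temporal SDKs) that would fire at noon every Monday:

```csharp
var cronHandle = await client.CreateScheduleAsync(
    "my-cron-schedule-id",
    new(
        Action: ScheduleActionStartWorkflow.Create(
            (MyWorkflow wf) => wf.RunAsync(),
            new(id: "my-cron-workflow-id", taskQueue: "my-task-queue")),
        Spec: new()
        {
            // Cron fields: minute, hour, day of month, month, day of week
            CronExpressions = new[] { "0 12 * * MON" },
        }));
```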
### Backfill a Scheduled Workflow {#backfill-a-scheduled-workflow}

**How to backfill a Scheduled Workflow using the Temporal .NET SDK**

The backfill action executes the Actions that would have run during a specified past time range. This command is useful when you need to execute a missed or delayed Action, or when you want to test the Workflow ahead of its scheduled time.

To backfill a Scheduled Workflow Execution in .NET, use the [BackfillAsync()](https://dotnet.temporal.io/api/Temporalio.Client.Schedules.ScheduleHandle.html#Temporalio_Client_Schedules_ScheduleHandle_BackfillAsync_System_Collections_Generic_IReadOnlyCollection_Temporalio_Client_Schedules_ScheduleBackfill__Temporalio_Client_RpcOptions_) method on the Schedule Handle.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var handle = client.GetScheduleHandle("my-schedule-id");
var now = DateTime.Now;
await handle.BackfillAsync(new List<ScheduleBackfill>
{
    new(
        StartAt: now - TimeSpan.FromDays(30),
        EndAt: now - TimeSpan.FromDays(20),
        Overlap: ScheduleOverlapPolicy.AllowAll),
});
```

### Delete a Scheduled Workflow {#delete-a-scheduled-workflow}

**How to delete a Scheduled Workflow using the Temporal .NET SDK**

The delete action enables you to delete a Schedule. When you delete a Schedule, it does not affect any Workflows that were started by the Schedule.

To delete a Scheduled Workflow Execution in .NET, use the [DeleteAsync()](https://dotnet.temporal.io/api/Temporalio.Client.Schedules.ScheduleHandle.html#Temporalio_Client_Schedules_ScheduleHandle_DeleteAsync_Temporalio_Client_RpcOptions_) method on the Schedule Handle.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var handle = client.GetScheduleHandle("my-schedule-id");
await handle.DeleteAsync();
```

### Describe a Scheduled Workflow {#describe-a-scheduled-workflow}

**How to describe a Scheduled Workflow using the Temporal .NET SDK**

The describe action shows the current Schedule configuration, including information about past, current, and future Workflow Runs. This command is helpful when you want to get a detailed view of the Schedule and its associated Workflow Runs.

To describe a Scheduled Workflow Execution in .NET, use the [DescribeAsync()](https://dotnet.temporal.io/api/Temporalio.Client.Schedules.ScheduleHandle.html#Temporalio_Client_Schedules_ScheduleHandle_DescribeAsync_Temporalio_Client_RpcOptions_) method on the Schedule Handle.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var handle = client.GetScheduleHandle("my-schedule-id");
var desc = await handle.DescribeAsync();
Console.WriteLine("Schedule info: {0}", desc.Info);
```

### List a Scheduled Workflow {#list-a-scheduled-workflow}

**How to list a Scheduled Workflow using the Temporal .NET SDK**

The list action lists all the available Schedules. This command is useful when you want to view a list of all the Schedules and their respective Schedule IDs.

To list all Schedules, use the [ListSchedulesAsync()](https://dotnet.temporal.io/api/Temporalio.Client.ITemporalClient.html#Temporalio_Client_ITemporalClient_ListSchedulesAsync_Temporalio_Client_Schedules_ScheduleListOptions_) method on the Client. This returns an async enumerable. If a Schedule is added or deleted, it may not be available in the list immediately.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
await foreach (var desc in client.ListSchedulesAsync())
{
    Console.WriteLine("Schedule info: {0}", desc.Info);
}
```
### Pause a Scheduled Workflow {#pause-a-scheduled-workflow}

**How to pause a Scheduled Workflow using the Temporal .NET SDK**

The pause action enables you to pause and unpause a Schedule. When you pause a Schedule, all the future Workflow Runs associated with the Schedule are temporarily stopped. This command is useful when you want to temporarily halt a Workflow due to maintenance or any other reason.

To pause a Scheduled Workflow Execution in .NET, use the [PauseAsync()](https://dotnet.temporal.io/api/Temporalio.Client.Schedules.ScheduleHandle.html#Temporalio_Client_Schedules_ScheduleHandle_PauseAsync_System_String_Temporalio_Client_RpcOptions_) method on the Schedule Handle. You can pass a note to the `PauseAsync()` method to provide a reason for pausing the Schedule.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var handle = client.GetScheduleHandle("my-schedule-id");
await handle.PauseAsync("Pausing the schedule for now");
```
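To resume a paused Schedule, use the corresponding unpause call on the same handle. A minimal sketch (assuming an `UnpauseAsync()` counterpart to `PauseAsync()` that likewise accepts an optional note):

```csharp
await handle.UnpauseAsync("Resuming the schedule");
```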
### Trigger a Scheduled Workflow {#trigger-a-scheduled-workflow}

**How to trigger a Scheduled Workflow using the Temporal .NET SDK**

The trigger action triggers an immediate action with a given Schedule. By default, this action is subject to the Overlap Policy of the Schedule. This command is helpful when you want to execute a Workflow outside of its scheduled time.

To trigger a Scheduled Workflow Execution in .NET, use the [TriggerAsync()](https://dotnet.temporal.io/api/Temporalio.Client.Schedules.ScheduleHandle.html#Temporalio_Client_Schedules_ScheduleHandle_TriggerAsync_Temporalio_Client_Schedules_ScheduleTriggerOptions_) method on the Schedule Handle.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var handle = client.GetScheduleHandle("my-schedule-id");
await handle.TriggerAsync();
```

### Update a Scheduled Workflow {#update-a-scheduled-workflow}

**How to update a Scheduled Workflow using the Temporal .NET SDK**

The update action enables you to update an existing Schedule. This command is useful when you need to modify the Schedule's configuration, such as changing the start time, end time, or interval.

To update a Scheduled Workflow Execution in .NET, use the [UpdateAsync()](https://dotnet.temporal.io/api/Temporalio.Client.Schedules.ScheduleHandle.html#Temporalio_Client_Schedules_ScheduleHandle_UpdateAsync_System_Func_Temporalio_Client_Schedules_ScheduleUpdateInput_Temporalio_Client_Schedules_ScheduleUpdate__Temporalio_Client_RpcOptions_) method on the Schedule Handle. This method accepts a callback that provides input with the current Schedule. A new Schedule can be created and returned from that callback to perform the update.

```csharp
using Temporalio.Client;
using Temporalio.Client.Schedules;

var client = await TemporalClient.ConnectAsync(new("localhost:7233"));
var handle = client.GetScheduleHandle("my-schedule-id");
await handle.UpdateAsync(input =>
{
    var newAction = ScheduleActionStartWorkflow.Create(
        (MyWorkflow wf) => wf.RunAsync(),
        new(id: "my-workflow-id", taskQueue: "my-task-queue"));
    return new(input.Description.Schedule with { Action = newAction });
});
```

## Use Start Delay {#start-delay}

**How to use Start Delay using the Temporal .NET SDK**

Use `StartDelay` to schedule a Workflow Execution at a specific one-time future point rather than on a recurring schedule. Set the `StartDelay` option on `WorkflowOptions` in either the `StartWorkflowAsync()` or `ExecuteWorkflowAsync()` methods on the Client.

```csharp
var handle = await client.StartWorkflowAsync(
    (MyWorkflow wf) => wf.RunAsync(),
    new(id: "my-workflow-id", taskQueue: "my-task-queue")
    {
        StartDelay = TimeSpan.FromHours(3),
    });
```

---

## Set up your local with the .NET SDK

# Quickstart

Configure your local development environment to get started developing with Temporal.

## Install .NET

The .NET SDK requires .NET 6.0 or later. Install .NET by following the [official .NET instructions](https://dotnet.microsoft.com/download/dotnet/6.0).

## Install the Temporal .NET SDK

Create a solution and the three projects used in this guide: `Workflow` (class library), `Worker` (console), and `Client` (console). Add them to the solution, then install the Temporal SDK in each project.

```bash
# Create solution and projects
mkdir TemporalioHelloWorld
cd TemporalioHelloWorld
dotnet new sln -n TemporalioHelloWorld
dotnet new classlib -o Workflow
dotnet new console -o Worker
dotnet new console -o Client

# Add projects to the solution
dotnet sln TemporalioHelloWorld.sln add Workflow/Workflow.csproj Worker/Worker.csproj Client/Client.csproj

# Add project references
dotnet add Worker/Worker.csproj reference Workflow/Workflow.csproj
dotnet add Client/Client.csproj reference Workflow/Workflow.csproj

# Install Temporal SDK in each project
dotnet add Workflow/Workflow.csproj package Temporalio
dotnet add Worker/Worker.csproj package Temporalio
dotnet add Client/Client.csproj package Temporalio
```

Build the solution:

```bash
dotnet build
```

Tip: You can also centralize the `Temporalio` package for all projects using `Directory.Packages.props` and `Directory.Build.props` at the solution root.

## Install Temporal CLI and start the development server

The fastest way to get a development version of the Temporal Service running on your local machine is to use [Temporal CLI](https://docs.temporal.io/cli). Choose your operating system to install Temporal CLI:

- macOS: install the Temporal CLI using Homebrew:

  ```bash
  brew install temporal
  ```

- Windows: download the Temporal CLI archive for your architecture (amd64 or arm64), extract it, and add `temporal.exe` to your PATH.

- Linux: download the Temporal CLI archive for your architecture (amd64 or arm64), extract it, and move the `temporal` binary into your PATH, for example:

  ```bash
  sudo mv temporal /usr/local/bin
  ```

After installing, open a new Terminal window and start the development server:

```bash
temporal server start-dev
```

The Temporal Web UI may be on a different port in some examples or tutorials. To change the port, pass the `--ui-port` option when starting the server:

```bash
temporal server start-dev --ui-port 8080
```

The Temporal Web UI will now be available at http://localhost:8080.
## Start the development server

Once you've installed Temporal CLI and added it to your PATH, open a new Terminal window and run the following command:

```bash
temporal server start-dev
```

This command starts a local Temporal Service. It starts the Web UI, creates the default Namespace, and uses an in-memory database.

The Temporal Service will be available on localhost:7233. The Temporal Web UI will be available at http://localhost:8233.

Leave the local Temporal Service running as you work through tutorials and other projects. You can stop the Temporal Service at any time by pressing CTRL+C.

Once you have everything installed, you're ready to build apps with Temporal on your local machine.

## Run Hello World: Test Your Installation

Now let's verify your setup is working by creating and running a complete Temporal application with both a Workflow and Activity. This test will confirm that:

- Your .NET SDK installation is working
- Your local Temporal Service is running
- You can successfully create and execute Workflows and Activities
- The communication between components is functioning correctly
Tip: Example Directory Structure ```text TemporalioHelloWorld/ ├── Client/ │ ├── Client.csproj │ └── Program.cs # Starts a workflow ├── Worker/ │ ├── Worker.csproj │ └── Program.cs # Runs a worker ├── Workflow/ │ ├── Workflow.csproj │ ├── MyActivities.cs # Activity definition │ └── SayHelloWorkflow.cs # Workflow definition └── TemporalioHelloWorld.sln ```
### 1. Create the Activity and Workflow

#### Create an Activity file (MyActivities.cs) in the Workflow project:

```csharp
namespace MyNamespace;

using Temporalio.Activities;

public class MyActivities
{
    // Activities can be async and/or static too! We just demonstrate instance
    // methods since many will use them that way.
    [Activity]
    public string SayHello(string name) => $"Hello, {name}!";
}
```

An Activity is a normal function or method that executes a single, well-defined action, either short- or long-running. Activities often interact with the outside world, such as sending emails, making network requests, writing to a database, or calling an API, and those interactions are prone to failure. If an Activity fails, Temporal automatically retries it based on your configuration (see the retry sketch at the end of this step).

#### Create a Workflow file (SayHelloWorkflow.cs) in the Workflow project:

```csharp
namespace MyNamespace;

using Temporalio.Workflows;

[Workflow]
public class SayHelloWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string name)
    {
        // This workflow just runs a simple activity to completion.
        // StartActivityAsync could be used to just start and there are many
        // other things that you can do inside a workflow.
        return await Workflow.ExecuteActivityAsync(
            // This is a lambda expression where the instance is typed. If this
            // were static, you wouldn't need a parameter.
            (MyActivities act) => act.SayHello(name),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
    }
}
```

Workflows orchestrate Activities and contain the application logic. Temporal Workflows are resilient. They can run and keep running for years, even if the underlying infrastructure fails. If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off.
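For instance, here is a hedged variation of the `ExecuteActivityAsync` call above that caps retries. The option names follow `Temporalio.Common.RetryPolicy`; the values are illustrative:

```csharp
using Temporalio.Common;

// Inside the workflow: the same activity call, with an explicit retry policy
return await Workflow.ExecuteActivityAsync(
    (MyActivities act) => act.SayHello(name),
    new()
    {
        StartToCloseTimeout = TimeSpan.FromMinutes(5),
        RetryPolicy = new()
        {
            InitialInterval = TimeSpan.FromSeconds(1), // wait 1s before the first retry
            BackoffCoefficient = 2,                    // double the wait on each retry
            MaximumAttempts = 3,                       // give up after 3 attempts
        },
    });
```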
### 2. Create the Worker

With your Activity and Workflow defined, you need a Worker to execute them.

#### Create a Worker file (Program.cs) in the Worker project:

```csharp
using MyNamespace;
using Temporalio.Client;
using Temporalio.Worker;

// Create a client to localhost on "default" namespace
var client = await TemporalClient.ConnectAsync(new("localhost:7233"));

// Cancellation token to shutdown worker on ctrl+c
using var tokenSource = new CancellationTokenSource();
Console.CancelKeyPress += (_, eventArgs) =>
{
    tokenSource.Cancel();
    eventArgs.Cancel = true;
};

// Create an activity instance since we have instance activities. If we had
// all static activities, we could just reference those directly.
var activities = new MyActivities();

// Create worker with the activity and workflow registered
using var worker = new TemporalWorker(
    client,
    new TemporalWorkerOptions("my-task-queue")
        .AddActivity(activities.SayHello)
        .AddWorkflow<SayHelloWorkflow>());

// Run worker until cancelled
Console.WriteLine("Running worker");
try
{
    await worker.ExecuteAsync(tokenSource.Token);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Worker cancelled");
}
```

Run the Worker:

```bash
dotnet run --project Worker/Worker.csproj
```

Keep this terminal running; you should see `Running worker` displayed.

A Worker polls the Task Queue you configure it with, looking for work to do. Once the Worker dequeues a Workflow or Activity Task from the Task Queue, it executes that Task. Workers are a crucial part of your Temporal application because they're what actually execute the Tasks defined in your Workflows and Activities.

For more information on Workers, see [Understanding Temporal](/evaluate/understanding-temporal#workers) and a [deep dive into Workers](/workers).

### 3. Execute the Workflow

Now that your Worker is running, it's time to start a Workflow Execution. This final step will validate that everything is working correctly.

#### Create a Client file (Program.cs) in the Client project:

```csharp
using MyNamespace;
using Temporalio.Client;

// Create a client to localhost on "default" namespace
var client = await TemporalClient.ConnectAsync(new("localhost:7233"));

// Run workflow
var result = await client.ExecuteWorkflowAsync(
    (SayHelloWorkflow wf) => wf.RunAsync("Temporal"),
    new(id: $"my-workflow-id-{Guid.NewGuid()}", taskQueue: "my-task-queue"));

Console.WriteLine("Workflow result: {0}", result);
```

While the Worker is still running, run the Workflow:

```bash
dotnet run --project Client/Client.csproj
```

### Verify Success

If everything is working correctly, you should see:

- The Worker processing the Workflow and Activity
- Output: `Workflow result: Hello, Temporal!`
- Workflow Execution details in the [Temporal Web UI](http://localhost:8233)

Next: Run your first Temporal Application, a tutorial that creates a basic Workflow and runs it with the Temporal .NET SDK.

---

## Temporal Client - .NET SDK

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the Temporal Service. Communication with a Temporal Service lets you perform actions such as starting Workflow Executions, sending Signals and Queries to Workflow Executions, getting Workflow results, and more.

This page shows you how to do the following using the .NET SDK with the Temporal Client:

- [Connect to a local development Temporal Service](#connect-to-development-service)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Start a Workflow Execution](#start-workflow)
- [Get Workflow results](#get-workflow-results)

A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.

## Connect to development Temporal Service {#connect-to-development-service}

Use [`TemporalClient.ConnectAsync`](https://dotnet.temporal.io/api/Temporalio.Client.TemporalClient.html#Temporalio_Client_TemporalClient_ConnectAsync_Temporalio_Client_TemporalClientConnectOptions_) to create a client. Connection options include the Temporal Server address, Namespace, and (optionally) TLS configuration. You can provide these options directly in code, or load them from **environment variables** and/or a **TOML configuration file** using the `Temporalio.Client.EnvConfig` helpers. We recommend environment variables or a configuration file for secure, repeatable configuration.

When you're running a Temporal Service locally (such as with the [Temporal CLI dev server](https://docs.temporal.io/cli/server#start-dev)), the required options are minimal. If you don't specify a host and port, most connections default to `127.0.0.1:7233` and the `default` Namespace.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code.
If you don't provide the configuration file path, the SDK looks for it at the path `~/.config/temporalio/temporal.toml` or the equivalent on your OS. Refer to [Environment Configuration](../environment-configuration.mdx#configuration-methods) for more details about configuration files and profiles.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines two profiles: `default` and `prod`. Each profile has its own set of connection options.

```toml title="config.toml"
# Default profile for local development
[profile.default]
address = "localhost:7233"
namespace = "default"

# Optional: Add custom gRPC headers
[profile.default.grpc_meta]
my-custom-header = "development-value"
trace-id = "dev-trace-123"

# Production profile for Temporal Cloud
[profile.prod]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"

# TLS configuration for production
[profile.prod.tls]
# TLS auto-enables when TLS config or an API key is present
# disabled = false
client_cert_path = "/etc/temporal/certs/client.pem"
client_key_path = "/etc/temporal/certs/client.key"

# Custom headers for production
[profile.prod.grpc_meta]
environment = "production"
service-version = "v1.2.3"
```

You can create a Temporal Client using a profile from the configuration file as follows. In this example, you load the `default` profile for local development:
```csharp title="LoadFromFile.cs" {27-30}
using Temporalio.Client;
using Temporalio.Client.EnvConfig;

namespace TemporalioSamples.EnvConfig;

/// <summary>
/// Sample demonstrating loading the default environment configuration profile
/// from a TOML file.
/// </summary>
public static class LoadFromFile
{
    public static async Task RunAsync()
    {
        Console.WriteLine("--- Loading default profile from config.toml ---");
        try
        {
            // For this sample to be self-contained, we explicitly provide the path to
            // the config.toml file included in this directory.
            // By default though, the config.toml file will be loaded from
            // ~/.config/temporalio/temporal.toml (or the equivalent standard config directory on your OS).
            var configFile = Path.Combine(Directory.GetCurrentDirectory(), "config.toml");

            // LoadClientConnectOptions is a helper that loads a profile and prepares
            // the config for TemporalClient.ConnectAsync. By default, it loads the
            // "default" profile.
            var connectOptions = ClientEnvConfig.LoadClientConnectOptions(new ClientEnvConfig.ProfileLoadOptions
            {
                ConfigSource = DataSource.FromPath(configFile),
            });

            Console.WriteLine($"Loaded 'default' profile from {configFile}.");
            Console.WriteLine($"  Address: {connectOptions.TargetHost}");
            Console.WriteLine($"  Namespace: {connectOptions.Namespace}");
            if (connectOptions.RpcMetadata?.Count > 0)
            {
                Console.WriteLine($"  gRPC Metadata: {string.Join(", ", connectOptions.RpcMetadata.Select(kv => $"{kv.Key}={kv.Value}"))}");
            }

            Console.WriteLine("\nAttempting to connect to client...");
            var client = await TemporalClient.ConnectAsync(connectOptions);
            Console.WriteLine("✅ Client connected successfully!");

            // Test the connection by checking the service
            var sysInfo = await client.Connection.WorkflowService.GetSystemInfoAsync(new());
            Console.WriteLine("✅ Successfully verified connection to Temporal server!\n{0}", sysInfo);
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            Console.WriteLine($"❌ Failed to connect: {ex.Message}");
        }
    }
}
```

Use the `EnvConfig` package to set connection options for the Temporal Client using environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables will take precedence.

Set the following environment variables before running your .NET application. Replace the placeholder values with your actual configuration. Since this is for a local development Temporal Service, the values connect to `localhost:7233` and the `default` Namespace. You may omit these variables entirely since they're the defaults.

```bash
export TEMPORAL_NAMESPACE="default"
export TEMPORAL_ADDRESS="localhost:7233"
```

After setting the environment variables, use the following code to create the Temporal Client:

```csharp
using Temporalio.Client;
using Temporalio.Client.EnvConfig;

namespace TemporalioSamples.EnvConfig;

/// <summary>
/// Sample demonstrating creating a client from environment-based configuration.
/// </summary>
public static class LoadFromEnv
{
    public static async Task RunAsync()
    {
        try
        {
            var connectOptions = ClientEnvConfig.LoadClientConnectOptions();

            Console.WriteLine("\nAttempting to connect to client...");
            var client = await TemporalClient.ConnectAsync(connectOptions);
            Console.WriteLine("✅ Client connected successfully!");
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            Console.WriteLine($"❌ Failed to connect: {ex.Message}");
        }
    }
}
```

If you don't want to use environment variables or a configuration file, you can specify connection options directly in code. This is convenient for local development and testing. You can also load a base configuration from environment variables or a configuration file, and then override specific options in code.
```csharp
using System;
using System.Threading.Tasks;
using Temporalio.Client;

namespace TemporalioSamples.Manual
{
    public static class ManualConnect
    {
        public static async Task RunAsync()
        {
            Console.WriteLine("--- Connecting manually to Temporal ---");
            var client = await TemporalClient.ConnectAsync(new TemporalClientConnectOptions
            {
                TargetHost = "localhost:7233",
                Namespace = "default",
            });
            Console.WriteLine("✅ Connected to local Temporal service!");
        }
    }
}
```

## Connect to Temporal Cloud {#connect-to-temporal-cloud}

You can connect to Temporal Cloud using either an [API key](/cloud/api-keys) or through mTLS. Connection to Temporal Cloud or any secured Temporal Service requires additional connection options compared to connecting to an unsecured local development instance:

- Your credentials for authentication.
  - If you are using an API key, provide the API key value.
  - If you are using mTLS, provide the mTLS CA certificate and mTLS private key.
- Your _Namespace and Account ID_ combination, which follows the format `<namespace_id>.<account_id>`.
- The _endpoint_ may vary. The most common endpoint used is the gRPC regional endpoint, which follows the format `<region>.<cloud_provider>.api.temporal.io:7233`.
  - For Namespaces with High Availability features with API key authentication enabled, use the gRPC Namespace endpoint `<namespace_id>.<account_id>.tmprl.cloud:7233`. This allows automated failover without needing to switch endpoints.

You can find the Namespace and Account ID, as well as the endpoint, on the Namespaces tab:

![The Namespace and Account ID combination on the left, and the regional endpoint on the right](/img/cloud/apikeys/namespaces-and-regional-endpoints.png)

You can provide these connection options using environment variables, a configuration file, or directly in code.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. For a list of all available configuration options you can set in the TOML file, refer to [Environment Configuration](/references/client-environment-configuration).

You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the path to the configuration file, the SDK looks for it at the default path `~/.config/temporalio/temporal.toml`.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::
For example, the following TOML configuration file defines a `cloud` profile with the necessary connection options to connect to Temporal Cloud via an API key:

```toml
# Cloud profile for Temporal Cloud
[profile.cloud]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"
```

If you want to use mTLS authentication instead of an API key, replace the `api_key` field with your mTLS certificate and private key:

```toml
# Cloud profile for Temporal Cloud
[profile.cloud]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
tls_client_cert_data = "your-tls-client-cert-data"
tls_client_key_path = "your-tls-client-key-path"
```

With the connection options defined in the configuration file, use the `ClientEnvConfig.LoadClientConnectOptions` method to create a Temporal Client from a named profile. The following sample loads a `staging` profile from `config.toml` and then programmatically overrides specific connection options before creating the client.

```csharp title="LoadProfile.cs" {25,41}
using Temporalio.Client;
using Temporalio.Client.EnvConfig;

namespace TemporalioSamples.EnvConfig;

/// <summary>
/// Sample demonstrating loading a named environment configuration profile and
/// programmatically overriding its values.
/// </summary>
public static class LoadProfile
{
    public static async Task RunAsync()
    {
        Console.WriteLine("--- Loading 'staging' profile with programmatic overrides ---");
        try
        {
            var configFile = Path.Combine(Directory.GetCurrentDirectory(), "config.toml");
            var profileName = "staging";

            Console.WriteLine("The 'staging' profile in config.toml has an incorrect address (localhost:9999).");
            Console.WriteLine("We'll programmatically override it to the correct address.");

            // Load the 'staging' profile
            var connectOptions = ClientEnvConfig.LoadClientConnectOptions(new ClientEnvConfig.ProfileLoadOptions
            {
                Profile = profileName,
                ConfigSource = DataSource.FromPath(configFile),
            });

            // Override the target host to the correct address.
            // This is the recommended way to override configuration values.
            connectOptions.TargetHost = "localhost:7233";

            Console.WriteLine($"\nLoaded '{profileName}' profile from {configFile} with overrides.");
            Console.WriteLine($"  Address: {connectOptions.TargetHost} (overridden from localhost:9999)");
            Console.WriteLine($"  Namespace: {connectOptions.Namespace}");

            Console.WriteLine("\nAttempting to connect to client...");
            var client = await TemporalClient.ConnectAsync(connectOptions);
            Console.WriteLine("✅ Client connected successfully!");

            // Test the connection by checking the service
            var sysInfo = await client.Connection.WorkflowService.GetSystemInfoAsync(new());
            Console.WriteLine("✅ Successfully verified connection to Temporal server!\n{0}", sysInfo);
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            Console.WriteLine($"❌ Failed to connect: {ex.Message}");
        }
    }
}
```

The following environment variables are required to connect to Temporal Cloud:

- `TEMPORAL_NAMESPACE`: Your Namespace and Account ID combination in the format `<namespace_id>.<account_id>`.
- `TEMPORAL_ADDRESS`: The gRPC endpoint for your Temporal Cloud Namespace.
- `TEMPORAL_API_KEY`: Your API key value. Required if you are using API key authentication.
- `TEMPORAL_TLS_CLIENT_CERT_DATA` or `TEMPORAL_TLS_CLIENT_CERT_PATH`: Your mTLS client certificate data or file path. Required if you are using mTLS authentication.
- `TEMPORAL_TLS_CLIENT_KEY_DATA` or `TEMPORAL_TLS_CLIENT_KEY_PATH`: Your mTLS client private key data or file path. Required if you are using mTLS authentication.
Ensure these environment variables exist in your environment before running your .NET application.

Use the `EnvConfig` package to set connection options for the Temporal Client using environment variables. The `ClientEnvConfig.LoadClientConnectOptions` method automatically loads all environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables will take precedence.

```csharp {16,20}
using Temporalio.Client;
using Temporalio.Client.EnvConfig;

namespace TemporalioSamples.EnvConfig;

/// <summary>
/// Sample demonstrating creating a client from environment-based configuration.
/// </summary>
public static class LoadFromEnv
{
    public static async Task RunAsync()
    {
        try
        {
            var connectOptions = ClientEnvConfig.LoadClientConnectOptions();

            Console.WriteLine("\nAttempting to connect to client...");
            var client = await TemporalClient.ConnectAsync(connectOptions);
            Console.WriteLine("✅ Client connected successfully!");
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            Console.WriteLine($"❌ Failed to connect: {ex.Message}");
        }
    }
}
```

You can also provide connection options directly in your .NET code. To create an initial connection, provide the endpoint, Namespace, and API key values to the `TemporalClient.ConnectAsync` method.

```csharp
var myClient = await TemporalClient.ConnectAsync(new("<endpoint>")
{
    Namespace = "<namespace_id>.<account_id>",
    ApiKey = "<api-key>",
    Tls = new(),
});
```

To update an API key, update the value of `ApiKey` on the existing client connection:

```csharp
myClient.Connection.ApiKey = myKeyUpdated;
```

## Start a Workflow {#start-workflow}

**How to start a Workflow using the Temporal .NET SDK**

[Workflow Execution](/workflow-execution) semantics rely on several parameters; to start a Workflow Execution you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event ([WorkflowExecutionStarted](/references/events#workflowexecutionstarted)) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first [WorkflowTaskScheduled](/references/events#workflowtaskscheduled) Event.

To start a Workflow Execution in .NET, use either the `StartWorkflowAsync()` or `ExecuteWorkflowAsync()` methods on the Client. You must set a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) and [Task Queue](/task-queue) in the `WorkflowOptions` given to the method.
```csharp
var result = await client.ExecuteWorkflowAsync(
    (MyWorkflow wf) => wf.RunAsync(),
    new(id: "my-workflow-id", taskQueue: "my-task-queue"));
Console.WriteLine("Result: {0}", result);
```

## Get Workflow results {#get-workflow-results}

**How to get the results of a Workflow Execution using the Temporal .NET SDK**

If the call to start a Workflow Execution is successful, you will gain access to the Workflow Execution's Run Id. The Workflow Id, Run Id, and Namespace may be used to uniquely identify a Workflow Execution in the system and get its result.

It's possible either to block on the result (synchronous execution) or to get the result at some other point in time (asynchronous execution). In the Temporal Platform, it's also acceptable to use Queries as the preferred method for accessing the state and results of Workflow Executions.

Use `StartWorkflowAsync()` or `GetWorkflowHandle()` to return a Workflow handle. Then use the `GetResultAsync()` method to await the result of the Workflow.

To get a handle for an existing Workflow by its Id, you can use `GetWorkflowHandle()`. Then use [`DescribeAsync()`](https://dotnet.temporal.io/api/Temporalio.Client.WorkflowHandle.html#Temporalio_Client_WorkflowHandle_DescribeAsync_Temporalio_Client_WorkflowDescribeOptions_) to get the current status of the Workflow. If the Workflow does not exist, this call fails.

```csharp
var handle = client.GetWorkflowHandle("my-workflow-id");
// An untyped handle doesn't know the result type, so provide it explicitly
// (a string result is assumed here)
var result = await handle.GetResultAsync<string>();
Console.WriteLine("Result: {0}", result);
```
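When you start the Workflow yourself, the returned handle already carries the result type, so no explicit type argument is needed. A minimal sketch, reusing the `MyWorkflow` type assumed above:

```csharp
var handle = await client.StartWorkflowAsync(
    (MyWorkflow wf) => wf.RunAsync(),
    new(id: "my-workflow-id", taskQueue: "my-task-queue"));

// Do other work here, then block until the Workflow Execution completes
var result = await handle.GetResultAsync();
```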
---

## Temporal Nexus - .NET SDK Feature Guide

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal .NET SDK support for Nexus is at [Pre-release](/evaluate/development-production-features/release-stages#pre-release). All APIs are experimental and may be subject to backwards-incompatible changes.

:::

Use [Temporal Nexus](/evaluate/nexus) to connect Temporal Applications within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations. This page shows how to do the following:

- [Run a development Temporal Service with Nexus enabled](#run-the-temporal-nexus-development-server)
- [Create caller and handler Namespaces](#create-caller-handler-namespaces)
- [Create a Nexus Endpoint to route requests from caller to handler](#create-nexus-endpoint)
- [Define the Nexus Service contract](#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](#develop-caller-workflow-nexus-service)
- [Make Nexus calls across Namespaces with a development Server](#nexus-calls-across-namespaces-dev-server)
- [Make Nexus calls across Namespaces in Temporal Cloud](#nexus-calls-across-namespaces-temporal-cloud)

:::note

This documentation uses source code derived from the [.NET Nexus sample](https://github.com/temporalio/samples-dotnet/tree/main/src/NexusSimple).

:::

## Run the Temporal Development Server with Nexus enabled {#run-the-temporal-nexus-development-server}

Prerequisites:

- [Install the latest Temporal CLI](https://learn.temporal.io/getting_started/dotnet/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli) (v1.3.0 or higher recommended)
- [Install the latest Temporal .NET SDK](https://learn.temporal.io/getting_started/dotnet/dev_environment/#install-the-temporal-net-sdk) (v1.9.0 or higher)

The first step in working with Temporal Nexus is starting a Temporal server with Nexus enabled.

```
temporal server start-dev
```

This command automatically starts the Temporal development server with the Web UI, and creates the `default` Namespace. It uses an in-memory database, so do not use it for real use cases.

The Temporal Web UI should now be accessible at [http://localhost:8233](http://localhost:8233), and the Temporal Server should now be available for client connections on `localhost:7233`.

## Create caller and handler Namespaces {#create-caller-handler-namespaces}

Before setting up Nexus Endpoints, create separate Namespaces for the caller and handler.

```
temporal operator namespace create --namespace my-target-namespace
temporal operator namespace create --namespace my-caller-namespace
```

`my-target-namespace` will contain the Nexus Operation handler, and we will use a Workflow in `my-caller-namespace` to call that Operation handler. We use different Namespaces to demonstrate cross-Namespace Nexus calls.

## Create a Nexus Endpoint to route requests from caller to handler {#create-nexus-endpoint}

After establishing caller and handler Namespaces, the next step is to create a Nexus Endpoint to route requests.

```
temporal operator nexus endpoint create \
  --name my-nexus-endpoint-name \
  --target-namespace my-target-namespace \
  --target-task-queue my-handler-task-queue
```

You can also use the Web UI to create the Namespaces and Nexus Endpoint.

## Define the Nexus Service contract {#define-nexus-service-contract}

Defining a clear contract for the Nexus Service is crucial for smooth communication. In this example, there is a service package that describes the Service and Operation names along with input/output types for caller Workflows to use the Nexus Endpoint.

Each [Temporal SDK includes and uses a default Data Converter](https://docs.temporal.io/dataconversion). The default data converter encodes payloads in the following order: Null, Byte array, Protobuf JSON, and JSON. In a polyglot environment, that is, where more than one language and SDK is being used to develop a Temporal solution, Protobuf and JSON are common choices. This example uses .NET classes serialized into JSON.

[NexusSimple/IHelloService.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/IHelloService.cs)

```csharp
using NexusRpc;

[NexusService]
public interface IHelloService
{
    static readonly string EndpointName = "nexus-simple-endpoint";

    [NexusOperation]
    EchoOutput Echo(EchoInput input);

    [NexusOperation]
    HelloOutput SayHello(HelloInput input);

    public record EchoInput(string Message);

    public record EchoOutput(string Message);

    public record HelloInput(string Name, HelloLanguage Language);

    public record HelloOutput(string Message);

    public enum HelloLanguage
    {
        En,
        Fr,
        De,
        Es,
        Tr,
    }
}
```

## Develop a Nexus Service and Operation handlers {#develop-nexus-service-operation-handlers}

Nexus Operation handlers are typically defined in the same Worker as the underlying Temporal primitives they abstract.

Operation handlers can decide if a given Nexus Operation will be synchronous or asynchronous. They can execute arbitrary code, and invoke underlying Temporal primitives such as a Workflow, Query, Signal, or Update.
The `Temporalio.Nexus` namespace has utilities to help create Nexus Operations:

- `NexusOperationExecutionContext.Current.TemporalClient` - Get the Temporal Client that the Worker was initialized with, for synchronous handlers backed by Temporal primitives such as Signals and Queries
- `WorkflowRunOperationHandler.FromHandleFactory` - Run a Workflow as an asynchronous Nexus Operation

This example starts with a sync Operation handler example using the `OperationHandler.Sync` method, and then shows how to create an async Operation handler that uses `WorkflowRunOperationHandler.FromHandleFactory` to start a handler Workflow from a Nexus Operation.

### Develop a Synchronous Nexus Operation handler

The `OperationHandler.Sync` method is for exposing simple RPC handlers. Its handler function can access an SDK client that can be used for signaling, querying, and listing Workflows. However, implementations are free to make arbitrary calls to other services or databases, or perform computations such as this one:

[NexusSimple/Handler/HelloService.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/Handler/HelloService.cs)

```csharp
using NexusRpc.Handlers;

[NexusServiceHandler(typeof(IHelloService))]
public class HelloService
{
    [NexusOperationHandler]
    public IOperationHandler<IHelloService.EchoInput, IHelloService.EchoOutput> Echo() =>
        // This Nexus service operation is a simple sync handler
        OperationHandler.Sync<IHelloService.EchoInput, IHelloService.EchoOutput>(
            (ctx, input) => new(input.Message));

    // ...
}
```

### Develop an Asynchronous Nexus Operation handler to start a Workflow

Use the `WorkflowRunOperationHandler.FromHandleFactory` method, which is the easiest way to expose a Workflow as an operation.

[NexusSimple/Handler/HelloService.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/Handler/HelloService.cs)

```csharp
using NexusRpc.Handlers;
using Temporalio.Nexus;

[NexusServiceHandler(typeof(IHelloService))]
public class HelloService
{
    // ...

    [NexusOperationHandler]
    public IOperationHandler<IHelloService.HelloInput, IHelloService.HelloOutput> SayHello() =>
        // This Nexus service operation is backed by a workflow run
        WorkflowRunOperationHandler.FromHandleFactory(
            (WorkflowRunOperationContext context, IHelloService.HelloInput input) =>
                context.StartWorkflowAsync(
                    (HelloHandlerWorkflow wf) => wf.RunAsync(input),
                    // Workflow IDs should typically be business meaningful IDs and are used to
                    // dedupe workflow starts. For this example, we're using the request ID
                    // allocated by Temporal when the caller workflow schedules the operation;
                    // this ID is guaranteed to be stable across retries of this operation.
                    new() { Id = context.HandlerContext.RequestId }));
}
```

Workflow IDs should typically be business-meaningful IDs and are used to dedupe Workflow starts. In general, the ID should be passed in the Operation input as part of the Nexus Service contract.

:::tip RESOURCES

[Attach multiple Nexus callers to a handler Workflow](/nexus/operations#attaching-multiple-nexus-callers) with a Conflict-Policy of Use-Existing.

:::

#### Map a Nexus Operation input to multiple Workflow arguments

A Nexus Operation can only take one input parameter. If you want a Nexus Operation to start a Workflow that takes multiple arguments, pass the individual arguments through to `RunAsync` in the handle factory.

[NexusMultiArg/Handler/HelloService.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusMultiArg/Handler/HelloService.cs)

```csharp
[NexusServiceHandler(typeof(IHelloService))]
public class HelloService
{
    [NexusOperationHandler]
    public IOperationHandler<IHelloService.HelloInput, IHelloService.HelloOutput> SayHello() =>
        // This Nexus service operation is backed by a workflow run. For this sample, we are
        // altering the parameters to the workflow (in this case expanding to two parameters).
        WorkflowRunOperationHandler.FromHandleFactory(
            (WorkflowRunOperationContext context, IHelloService.HelloInput input) =>
                context.StartWorkflowAsync(
                    (HelloHandlerWorkflow wf) => wf.RunAsync(input.Language, input.Name),
                    // Workflow IDs should typically be business meaningful IDs and are used to
                    // dedupe workflow starts. For this example, we're using the request ID
                    // allocated by Temporal when the caller workflow schedules the operation;
                    // this ID is guaranteed to be stable across retries of this operation.
                    new() { Id = context.HandlerContext.RequestId }));
}
```
### Register a Nexus Service in a Worker

After developing an asynchronous Nexus Operation handler to start a Workflow, the next step is to register a Nexus Service in a Worker.

[NexusSimple/Program.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/Program.cs)

```csharp
async Task RunHandlerWorkerAsync()
{
    // Run worker until cancelled
    logger.LogInformation("Running handler worker");
    using var worker = new TemporalWorker(
        await ConnectClientAsync("nexus-simple-handler-namespace"),
        new TemporalWorkerOptions(taskQueue: "nexus-simple-handler-sample").
            AddNexusService(new HelloService()).
            AddWorkflow<HelloHandlerWorkflow>());
    try
    {
        await worker.ExecuteAsync(tokenSource.Token);
    }
    catch (OperationCanceledException)
    {
        logger.LogInformation("Handler worker cancelled");
    }
}
```

## Develop a caller Workflow that uses the Nexus Service {#develop-caller-workflow-nexus-service}

Import the Service API package that has the necessary service and operation names and input/output types to execute a Nexus Operation from the caller Workflow:

[NexusSimple/Caller/EchoCallerWorkflow.workflow.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/Caller/EchoCallerWorkflow.workflow.cs)

```csharp
using Temporalio.Workflows;

[Workflow]
public class EchoCallerWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string message)
    {
        var output = await Workflow.CreateNexusClient<IHelloService>(IHelloService.EndpointName).
            ExecuteNexusOperationAsync(svc => svc.Echo(new(message)));
        return output.Message;
    }
}
```

[NexusSimple/Caller/HelloCallerWorkflow.workflow.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/Caller/HelloCallerWorkflow.workflow.cs)

```csharp
using Temporalio.Workflows;

[Workflow]
public class HelloCallerWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string name, IHelloService.HelloLanguage language)
    {
        var output = await Workflow.CreateNexusClient<IHelloService>(IHelloService.EndpointName).
            ExecuteNexusOperationAsync(svc => svc.SayHello(new(name, language)));
        return output.Message;
    }
}
```

### Register the caller Workflow in a Worker

After developing the caller Workflow, the next step is to register it with a Worker.

[NexusSimple/Program.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/Program.cs)

```csharp
async Task RunCallerWorkerAsync()
{
    // Run worker until cancelled
    logger.LogInformation("Running caller worker");
    using var worker = new TemporalWorker(
        await ConnectClientAsync("nexus-simple-caller-namespace"),
        new TemporalWorkerOptions(taskQueue: "nexus-simple-caller-sample").
            AddWorkflow<EchoCallerWorkflow>().
            AddWorkflow<HelloCallerWorkflow>());
    try
    {
        await worker.ExecuteAsync(tokenSource.Token);
    }
    catch (OperationCanceledException)
    {
        logger.LogInformation("Caller worker cancelled");
    }
}
```

### Develop a starter to start the caller Workflow

To initiate the caller Workflow, a starter program is used.
[NexusSimple/Program.cs](https://github.com/temporalio/samples-dotnet/blob/main/src/NexusSimple/Program.cs)

```csharp
async Task ExecuteCallerWorkflowAsync()
{
    logger.LogInformation("Executing caller echo workflow");
    var client = await ConnectClientAsync("nexus-simple-caller-namespace");
    var result1 = await client.ExecuteWorkflowAsync(
        (EchoCallerWorkflow wf) => wf.RunAsync("Nexus Echo 👋"),
        new(id: "nexus-simple-echo-id", taskQueue: "nexus-simple-caller-sample"));
    logger.LogInformation("Workflow result: {Result}", result1);

    logger.LogInformation("Executing caller hello workflow");
    var result2 = await client.ExecuteWorkflowAsync(
        (HelloCallerWorkflow wf) => wf.RunAsync("Temporal", IHelloService.HelloLanguage.Es),
        new(id: "nexus-simple-hello-id", taskQueue: "nexus-simple-caller-sample"));
    logger.LogInformation("Workflow result: {Result}", result2);
}
```

## Make Nexus calls across Namespaces with a development Server {#nexus-calls-across-namespaces-dev-server}

Follow the steps below to run the Nexus handler Worker, the Nexus caller Worker, and the starter app.

### Run Workers connected to a local development server

Run the Nexus handler Worker:

```bash
dotnet run handler-worker
```

In another terminal window, run the Nexus caller Worker:

```bash
dotnet run caller-worker
```

### Start a caller Workflow

With the Workers running, the final step in the local development process is to start a caller Workflow. Run the starter:

```bash
dotnet run caller-workflow
```

This shows the two Workflows being started and their results.

### Canceling a Nexus Operation {#canceling-a-nexus-operation}

To cancel a Nexus Operation from within a Workflow, cancel the cancellation token that was passed to the operation call. Only asynchronous operations can be canceled in Nexus, because cancellation is sent using an operation token. The Workflow or other resources backing the operation may choose to ignore the cancellation request; if ignored, the operation may still reach a terminal state. Once the caller Workflow completes, the caller's Nexus Machinery makes no further attempts to cancel operations that are still running. It's okay to leave operations running in some use cases. To ensure cancellations are delivered, wait for all pending operations to finish before exiting the Workflow.

## Make Nexus calls across Namespaces in Temporal Cloud {#nexus-calls-across-namespaces-temporal-cloud}

This section assumes you are already familiar with how to connect a Worker to Temporal Cloud. The `tcld` CLI is used to create Namespaces and the Nexus Endpoint, and mTLS client certificates are used to securely connect the caller and handler Workers to their respective Temporal Cloud Namespaces.

### Install the latest `tcld` CLI and generate certificates

To install the latest version of the `tcld` CLI, run the following command (on macOS):

```
brew install temporalio/brew/tcld
```

If you don't already have certificates, you can generate them for mTLS Worker authentication using the command below:

```
tcld gen ca --org $YOUR_ORG_NAME --validity-period 1y --ca-cert ca.pem --ca-key ca.key
```

These certificates will be valid for one year.

### Create caller and handler Namespaces

Before deploying to Temporal Cloud, ensure that the appropriate Namespaces are created for both the caller and handler. If you already have these Namespaces, you can skip this step.
```
tcld login
tcld namespace create \
    --namespace <caller-namespace> \
    --region us-west-2 \
    --ca-certificate-file 'path/to/your/ca.pem' \
    --retention-days 1
tcld namespace create \
    --namespace <handler-namespace> \
    --region us-west-2 \
    --ca-certificate-file 'path/to/your/ca.pem' \
    --retention-days 1
```

Alternatively, you can create Namespaces through the UI: [https://cloud.temporal.io/Namespaces](https://cloud.temporal.io/Namespaces).

### Create a Nexus Endpoint to route requests from caller to handler

To create a Nexus Endpoint you must have a Developer account role or higher, and have NamespaceAdmin permission on the `--target-namespace`.

```
tcld nexus endpoint create \
    --name <endpoint-name> \
    --target-task-queue my-handler-task-queue \
    --target-namespace <handler-namespace> \
    --allow-namespace <caller-namespace> \
    --description-file endpoint_description.md
```

The `--allow-namespace` option builds an Endpoint allowlist of caller Namespaces that can use the Nexus Endpoint, as described in Runtime Access Control.

Alternatively, you can create a Nexus Endpoint through the UI: [https://cloud.temporal.io/nexus](https://cloud.temporal.io/nexus).

## Observability

### Web UI

A synchronous Nexus Operation surfaces in the caller's Workflow history with just `NexusOperationScheduled` and `NexusOperationCompleted` events.

An asynchronous Nexus Operation surfaces in the caller's Workflow history with `NexusOperationScheduled`, `NexusOperationStarted`, and `NexusOperationCompleted` events.

### Temporal CLI

Use the `workflow describe` command to show pending Nexus Operations in the caller Workflow and any attached callbacks on the handler Workflow:

```
temporal workflow describe -w <workflow-id>
```

Nexus events are included in the caller's Workflow history:

```
temporal workflow show -w <workflow-id>
```

For **asynchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationStarted`
- `NexusOperationCompleted`

For **synchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationCompleted`

:::note

`NexusOperationStarted` isn't reported in the caller's history for synchronous operations.

:::

## Learn more

- Read the high-level description of the [Temporal Nexus feature](/evaluate/nexus) and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- Learn how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4) and [Encyclopedia](/nexus).
- Deploy Nexus Endpoints in production with [Temporal Cloud](/cloud/nexus).

---

## Testing - .NET SDK

The .NET test-suite feature guide describes the frameworks that facilitate Workflow and integration testing. In the context of Temporal, you can create these types of automated tests:

- **End-to-end:** Running a Temporal Server and Worker with all its Workflows and Activities; starting and interacting with Workflows from a Client.
- **Integration:** Anything between end-to-end and unit testing.
  - Running Activities with mocked Context and other SDK imports (and usually network requests).
  - Running Workers with mock Activities, and using a Client to start Workflows.
  - Running Workflows with mocked SDK imports.
- **Unit:** Running a piece of Workflow or Activity code and mocking any code it calls.

We generally recommend writing the majority of your tests as integration tests. Because the test server supports skipping time, use the test server for both end-to-end and integration tests with Workers.
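Because one local dev server (introduced in the next section via `WorkflowEnvironment.StartLocalAsync`) can be shared across many tests, a common xUnit approach is to start it once in a shared fixture. Here's a minimal sketch under that assumption; the `TemporalTestFixture` and `MyWorkflowTests` names are ours, not part of the SDK:

```csharp
using Temporalio.Testing;
using Xunit;

// Minimal sketch: start one local dev server and share it across a test class.
public class TemporalTestFixture : IAsyncLifetime
{
    public WorkflowEnvironment Env { get; private set; } = null!;

    // Started once before the tests that use this fixture run
    public async Task InitializeAsync() =>
        Env = await WorkflowEnvironment.StartLocalAsync();

    // Torn down once after they finish
    public async Task DisposeAsync() =>
        await Env.DisposeAsync();
}

// xUnit injects the fixture into each test class that declares it.
public class MyWorkflowTests : IClassFixture<TemporalTestFixture>
{
    private readonly TemporalTestFixture fixture;

    public MyWorkflowTests(TemporalTestFixture fixture) => this.fixture = fixture;

    // Tests use fixture.Env.Client with per-test Task Queues.
}
```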
## Test frameworks {#test-frameworks}

**Compatible testing frameworks**

The .NET SDK is compatible with any testing framework and does not have a specific recommendation. Most .NET SDK samples use [xUnit](https://xunit.net/).

## Testing Workflows {#testing-workflows}

**How to test Workflow Definitions using the Temporal .NET SDK**

Workflow testing can be done in an integration-test fashion against a real server; however, it is hard to simulate timeouts and other long, time-based code that way. The time-skipping Workflow test environment can help there.

### Testing Workflows with standard server

A non-time-skipping `Temporalio.Testing.WorkflowEnvironment` can be started via `StartLocalAsync`, which supports all standard Temporal features. It is actually the real Temporal dev server packaged in the Temporal CLI, lazily downloaded on first use and run as a subprocess in the background. Assuming tests properly use separate Task Queues, the same server can and should be reused across tests.

Here's a simple example of a Workflow:

```csharp
[Workflow]
public class SayHelloWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync(string name)
    {
        return $"Hello, {name}!";
    }
}
```

Here's how a test of that Workflow may appear in xUnit:

```csharp
using Temporalio.Testing;
using Temporalio.Worker;

[Fact]
public async Task SayHelloWorkflow_SimpleRun_Succeeds()
{
    // Start local dev server
    await using var env = await WorkflowEnvironment.StartLocalAsync();

    // Create a worker
    using var worker = new TemporalWorker(
        env.Client,
        new TemporalWorkerOptions($"task-queue-{Guid.NewGuid()}").
            AddWorkflow<SayHelloWorkflow>());

    // Run the worker only for the life of the code within
    await worker.ExecuteAsync(async () =>
    {
        // Execute the workflow and confirm the result
        var result = await env.Client.ExecuteWorkflowAsync(
            (SayHelloWorkflow wf) => wf.RunAsync("Temporal"),
            new(id: $"wf-{Guid.NewGuid()}", taskQueue: worker.Options.TaskQueue!));
        Assert.Equal("Hello, Temporal!", result);
    });
}
```

While this is just a demonstration, a local server is often used as a fixture across many tests.

### Testing Workflows with time skipping

Sometimes there is a need to test Workflows that run a long time or to test that timeouts occur. A time-skipping `Temporalio.Testing.WorkflowEnvironment` can be started via `StartTimeSkippingAsync`, which is a reimplementation of the Temporal server with special time-skipping capabilities. Like `StartLocalAsync`, this also lazily downloads the server process to run when first called. Note that, unlike `StartLocalAsync`, this environment is not thread safe nor safe for use with independent tests. It can technically be reused, but only for one test at a time, because time skipping is locked/unlocked at the environment level. Developers are encouraged to run it per test as needed.

#### Automatic time skipping

Here's a simple example of a Workflow that waits a day:

```csharp
[Workflow]
public class WaitADayWorkflow
{
    [WorkflowRun]
    public async Task<string> RunAsync()
    {
        await Workflow.DelayAsync(TimeSpan.FromDays(1));
        return "all done";
    }
}
```

A regular integration test of this Workflow on a normal server would be far too slow. However, the time-skipping server automatically skips to the next event when we wait on the result.
Here's a test for that Workflow in xUnit:

```csharp
using Temporalio.Testing;
using Temporalio.Worker;

[Fact]
public async Task WaitADayWorkflow_SimpleRun_Succeeds()
{
    // Start time-skipping test server
    await using var env = await WorkflowEnvironment.StartTimeSkippingAsync();

    // Create a worker
    using var worker = new TemporalWorker(
        env.Client,
        new TemporalWorkerOptions($"task-queue-{Guid.NewGuid()}").
            AddWorkflow<WaitADayWorkflow>());

    // Run the worker only for the life of the code within
    await worker.ExecuteAsync(async () =>
    {
        // Execute the workflow and confirm the result
        var result = await env.Client.ExecuteWorkflowAsync(
            (WaitADayWorkflow wf) => wf.RunAsync(),
            new(id: $"wf-{Guid.NewGuid()}", taskQueue: worker.Options.TaskQueue!));
        Assert.Equal("all done", result);
    });
}
```

This test will run almost instantly. This is because by calling `ExecuteWorkflowAsync` on our client, we are actually calling `StartWorkflowAsync` + `GetResultAsync`, and `GetResultAsync` automatically skips time as much as it can (basically until the end of the workflow or until an activity is run).

To disable automatic time-skipping while waiting for a workflow result, run code as a lambda passed to `env.WithAutoTimeSkippingDisabled` or `env.WithAutoTimeSkippingDisabledAsync`.

#### Manual time skipping

Until a Workflow is waited on, all time skipping in the time-skipping environment is done manually via `WorkflowEnvironment.DelayAsync`.

Here's a Workflow that waits for a Signal or times out:

```csharp
[Workflow]
public class SignalWorkflow
{
    private bool signalReceived = false;

    [WorkflowRun]
    public async Task<string> RunAsync()
    {
        // Wait for signal or timeout in 45 seconds
        if (await Workflow.WaitConditionAsync(() => signalReceived, TimeSpan.FromSeconds(45)))
        {
            return "got signal";
        }
        return "got timeout";
    }

    [WorkflowSignal]
    public async Task SomeSignalAsync() => signalReceived = true;
}
```

To test a normal Signal in xUnit, you might:

```csharp
using Temporalio.Testing;
using Temporalio.Worker;

[Fact]
public async Task SignalWorkflow_SendSignal_HasExpectedResult()
{
    await using var env = await WorkflowEnvironment.StartTimeSkippingAsync();
    using var worker = new TemporalWorker(
        env.Client,
        new TemporalWorkerOptions($"task-queue-{Guid.NewGuid()}").
            AddWorkflow<SignalWorkflow>());
    await worker.ExecuteAsync(async () =>
    {
        var handle = await env.Client.StartWorkflowAsync(
            (SignalWorkflow wf) => wf.RunAsync(),
            new(id: $"wf-{Guid.NewGuid()}", taskQueue: worker.Options.TaskQueue!));
        await handle.SignalAsync(wf => wf.SomeSignalAsync());
        Assert.Equal("got signal", await handle.GetResultAsync());
    });
}
```

But how would you test the timeout part? Like so:

```csharp
using Temporalio.Testing;
using Temporalio.Worker;

[Fact]
public async Task SignalWorkflow_SignalTimeout_HasExpectedResult()
{
    await using var env = await WorkflowEnvironment.StartTimeSkippingAsync();
    using var worker = new TemporalWorker(
        env.Client,
        new TemporalWorkerOptions($"task-queue-{Guid.NewGuid()}").
            AddWorkflow<SignalWorkflow>());
    await worker.ExecuteAsync(async () =>
    {
        var handle = await env.Client.StartWorkflowAsync(
            (SignalWorkflow wf) => wf.RunAsync(),
            new(id: $"wf-{Guid.NewGuid()}", taskQueue: worker.Options.TaskQueue!));
        await env.DelayAsync(TimeSpan.FromSeconds(50));
        Assert.Equal("got timeout", await handle.GetResultAsync());
    });
}
```

### Mocking Activities

When testing Workflows, often you don't want to actually run the Activities. Activities are just methods with the `[Activity]` attribute.
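For illustration, here's a minimal sketch of a fake Activity; the Activity name `GetGreeting`, the mock method, and the Workflow it serves are all hypothetical:

```csharp
using Temporalio.Activities;

public static class TestActivities
{
    // Fake implementation registered under the same Activity name the
    // Workflow invokes ("GetGreeting" is a hypothetical example), so the
    // real service is never called during the test.
    [Activity("GetGreeting")]
    public static string GetGreetingMock(string name) => $"Hello from test, {name}!";
}

// In the test worker, register the mock in place of the real Activity:
// new TemporalWorkerOptions($"task-queue-{Guid.NewGuid()}").
//     AddActivity(TestActivities.GetGreetingMock).
//     AddWorkflow<SayHelloWorkflow>();
```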
Write different/empty/fake/asserting implementations like this and pass them to the Worker to have them called during the test in place of the real Activities.

## Testing Activities {#test-activities}

**How to test Activity Definitions using the Temporal .NET SDK**

Unit testing an Activity, or any code that could run in an Activity, is done via the `Temporalio.Testing.ActivityEnvironment` class. Simply instantiate the class, and any code inside `RunAsync` will be invoked inside the activity context. The following important members are available on the environment to affect the activity context:

- `Info` - Activity info, defaulted to a basic set of values.
- `Logger` - Activity logger, defaulted to a null logger.
- `Cancel(CancelReason)` - Helper to set the reason and cancel the source.
- `CancelReason` - Cancel reason.
- `CancellationTokenSource` - Token source for issuing cancellation.
- `Heartbeater` - Callback invoked each heartbeat.
- `WorkerShutdownTokenSource` - Token source for issuing Worker shutdown.
- `PayloadConverter` - Defaulted to default payload converter.

## Replay test {#replay}

**How to do a Replay test using the Temporal .NET SDK**

Given a Workflow's history, it can be replayed locally to check for things like non-determinism errors. For example, assuming the `historyJson` parameter below is given a JSON string of history exported from the CLI or Web UI, the following method will replay it:

```csharp
using Temporalio;
using Temporalio.Worker;

public static async Task ReplayFromJsonAsync(string historyJson)
{
    var replayer = new WorkflowReplayer(
        new WorkflowReplayerOptions().AddWorkflow<SayHelloWorkflow>());
    await replayer.ReplayWorkflowAsync(WorkflowHistory.FromJson("my-workflow-id", historyJson));
}
```

If there is a non-determinism error, this will throw an exception.

Workflow history can be loaded from more than just JSON. It can be fetched individually from a Workflow handle, or even in a list. For example, the following code will check that all Workflow histories for a certain Workflow Type (i.e. Workflow class) are safe with the current Workflow code.

```csharp
using System.Runtime.ExceptionServices;
using Temporalio;
using Temporalio.Client;
using Temporalio.Worker;

public static async Task CheckPastHistoriesAsync(ITemporalClient client)
{
    var replayer = new WorkflowReplayer(
        new WorkflowReplayerOptions().AddWorkflow<SayHelloWorkflow>());
    var listIter = client.ListWorkflowHistoriesAsync("WorkflowType = 'SayHello'");
    await foreach (var result in replayer.ReplayWorkflowsAsync(listIter))
    {
        if (result.ReplayFailure != null)
        {
            ExceptionDispatchInfo.Throw(result.ReplayFailure);
        }
    }
}
```

---

## Versioning - .NET SDK

Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress.

The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.

There are two primary Versioning methods that you can use:
- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). This feature allows you to tag your Workers and programmatically roll them out in versioned deployments, so that old Workers can run old code paths and new Workers can run new code paths.
- [Versioning with Patching](#patching). This method works by adding branches to your code tied to specific revisions. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions.

## Worker Versioning

Temporal's [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) feature allows you to tag your Workers and programmatically roll them out in Deployment Versions, so that old Workers can run old code paths and new Workers can run new code paths. This way, you can pin your Workflows to specific revisions, avoiding the need for patching.

## Versioning with Patching {#patching}

### Adding a patch

A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow Executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning).

Suppose you have an initial Workflow version that calls `PrePatchActivity`:

```csharp
[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        this.result = await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.PrePatchActivity(),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
        // ...
    }
}
```

Now, you want to update your code to run `PostPatchActivity` instead. This represents your desired end state.

```csharp
[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        this.result = await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.PostPatchActivity(),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
        // ...
    }
}
```

The problem is that you cannot deploy `PostPatchActivity` directly until you're certain there are no more running Workflows created using the `PrePatchActivity` code; otherwise you are likely to cause a non-determinism error. Instead, you'll need to deploy `PostPatchActivity` and use the [Patched](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_Patched_System_String_) method to determine which version of the code to execute.

Patching is a three-step process:

1. Use [Patched](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_Patched_System_String_) to patch in new code and run it alongside the old code.
2. Remove the old code and apply [DeprecatePatch](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_DeprecatePatch_System_String_).
3. Once all old Workflows have left retention, remove `DeprecatePatch`.

### Patching in new code {#using-patched-for-workflow-history-markers}

Using `Patched` inserts a marker into the Workflow History. During replay, if a Worker encounters a history with that marker, it will fail the Workflow Task when the Workflow code doesn't produce the same patch marker (in this case, `my-patch`). This ensures you can safely deploy code from `PostPatchActivity` as a "feature flag" alongside the original version (`PrePatchActivity`).
```csharp
[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        if (Workflow.Patched("my-patch"))
        {
            this.result = await Workflow.ExecuteActivityAsync(
                (MyActivities a) => a.PostPatchActivity(),
                new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
        }
        else
        {
            this.result = await Workflow.ExecuteActivityAsync(
                (MyActivities a) => a.PrePatchActivity(),
                new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
        }
        // ...
    }
}
```

### Deprecating patches {#deprecated-patches}

After all Workflows started with `PrePatchActivity` code have left retention, you can [deprecate the patch](https://dotnet.temporal.io/api/Temporalio.Workflows.Workflow.html#Temporalio_Workflows_Workflow_DeprecatePatch_System_String_). Deprecated patches serve as a bridge between the final stage of the patching process and the final state that no longer has patches. They function similarly to regular patches by adding a marker to the Workflow History. However, this marker won't cause a replay failure when the Workflow code doesn't produce it.

If, during the deployment of `PostPatchActivity`, there are still live Workers running `PrePatchActivity` code and these Workers pick up Workflow histories generated by `PostPatchActivity`, they will safely use the patched branch.

```csharp
[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        Workflow.DeprecatePatch("my-patch");
        this.result = await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.PostPatchActivity(),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
        // ...
    }
}
```

### Removing a patch {#deploy-postpatchactivity}

Once all Workflow Executions that could have produced the `my-patch` marker have left retention, you can safely remove the `DeprecatePatch` call and deploy the final version of the code.

```csharp
[Workflow]
public class MyWorkflow
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        this.result = await Workflow.ExecuteActivityAsync(
            (MyActivities a) => a.PostPatchActivity(),
            new() { StartToCloseTimeout = TimeSpan.FromMinutes(5) });
        // ...
    }
}
```

Patching allows you to make changes to currently running Workflows. It is a powerful method for introducing compatible changes without introducing non-determinism errors.

### Detailed Overview of the Patched Function

For a more in-depth explanation of how the `Patched` method works, refer to the [Patching](/patching) Encyclopedia entry.

### Workflow cutovers

To understand why Patching is useful, it's helpful to demonstrate cutting over an entire Workflow. Since incompatible changes only affect open Workflow Executions of the same type, you can avoid determinism errors by creating a whole new Workflow when making changes. To do this, you can copy the Workflow Definition function, giving it a different name, and register both names with your Workers. For example, you would duplicate `SayHelloWorkflow` as `SayHelloWorkflowV2`:

```csharp
[Workflow]
public class SayHelloWorkflow
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        // This method contains the original code
    }
}

[Workflow]
public class SayHelloWorkflowV2
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        // This method contains the updated code
    }
}
```

You would then need to update the Worker configuration, and any other identifier strings, to register both Workflow Types:

```csharp
using var worker = new TemporalWorker(
    client,
    new TemporalWorkerOptions("greeting-tasks")
        .AddWorkflow<SayHelloWorkflow>()
        .AddWorkflow<SayHelloWorkflowV2>());
```

The downside of this method is that it requires you to duplicate code and to update any commands used to start the Workflow.
This can become impractical over time. This method also does not provide a way to version any still-running Workflows; it is essentially just a cutover, unlike Patching.

### Testing a Workflow for replay safety

To determine whether your Workflow needs a patch, or to verify that you've patched it successfully, you should incorporate [Replay Testing](/develop/dotnet/testing-suite#replay).

---

## Environment configuration

Temporal CLI and SDKs support configuring a Temporal Client using environment variables and TOML configuration files, rather than setting connection options programmatically in your code. This decouples connection settings from application logic, making it easier to manage different environments such as development, staging, and production without code changes.

For a list of all available configuration settings, their corresponding environment variables, and TOML file paths, refer to [Temporal Client Environment Configuration Reference](../references/client-environment-configuration).

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Environment configuration is in [Public Preview](../evaluate/development-production-features/release-stages.mdx#public-preview) in the Temporal Go, Python, Ruby, TypeScript, and .NET SDKs, as well as the Temporal CLI.

:::

## Configuration methods

You can configure your client using a TOML file, environment variables, or a combination of both. The configuration is loaded with a specific order of precedence:

1. Environment variables: These have the highest precedence. If an environment variable defines a setting, it will always override any value set in a configuration file. This makes it easy to provide secrets in dynamic environments.
2. TOML configuration file: A TOML file can be used to define one or more configuration profiles. This file is located by checking the following sources in order:
   1. The path specified by the `TEMPORAL_CONFIG_FILE` environment variable.
   2. The default configuration path for your operating system:
      - Linux: `~/.config/temporalio/temporal.toml`
      - macOS: `$HOME/Library/Application Support/temporalio/temporal.toml`
      - Windows: `%AppData%\temporalio\temporal.toml`

## TOML file configuration

You can use configuration profiles to maintain separate configurations within a single file for different environments. The Temporal client uses the `default` profile unless you specify another via the `TEMPORAL_PROFILE` environment variable or in the SDK's load options. If a requested profile doesn't exist, the application will return an error.

Here is an example `temporal.toml` file that defines several profiles: `default` for local development, `prod` for production, and `staging` with inline certificate data.
```toml
# Default profile for local development
[profile.default]
address = "localhost:7233"
namespace = "default"

# Optional: Add custom gRPC headers
[profile.default.grpc_meta]
my-custom-header = "development-value"
trace-id = "dev-trace-123"

# Production profile for Temporal Cloud
[profile.prod]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"

# TLS configuration for production
[profile.prod.tls]
# TLS is auto-enabled when this TLS config or API key is present, but you can configure it explicitly
# Use certificate files for mTLS
client_cert_path = "/etc/temporal/certs/client.pem"
client_key_path = "/etc/temporal/certs/client.key"

# Custom headers for production
[profile.prod.grpc_meta]
environment = "production"
service-version = "v1.2.3"

# Staging profile with inline certificate data
[profile.staging]
address = "staging.temporal.example.com:7233"
namespace = "staging"

[profile.staging.tls]
# Example of providing certificate data directly (base64 or PEM format)
client_cert_data = """-----BEGIN CERTIFICATE-----
MIICertificateDataHere...
-----END CERTIFICATE-----"""
client_key_data = """-----BEGIN PRIVATE KEY-----
MIIPrivateKeyDataHere...
-----END PRIVATE KEY-----"""
```

## CLI integration

The Temporal CLI includes `temporal config` commands that allow you to read and write the TOML configuration file. This provides a convenient way to manage your connection profiles without manually editing the file. Refer to [Temporal CLI Reference - `temporal config`](../cli/config.mdx) for more details.

- `temporal config get <property>`: Reads a specific value from the current profile.
- `temporal config set <property> <value>`: Sets a property in the current profile.
- `temporal config delete <property>`: Deletes a property from the current profile.
- `temporal config list`: Lists all available profiles in the config file.

These CLI commands directly manipulate the `temporal.toml` file. This differs from the SDKs, which only _read_ from the file and environment at runtime to establish a client connection.

You can select a profile for the CLI to use with the `--profile` flag. For example, `temporal --profile prod ...`.

The following code blocks provide copy-paste-friendly examples for setting up CLI profiles for both local development and Temporal Cloud.

This example shows how to set up a default profile for local development and a `prod` profile for Temporal Cloud using an API key.

```bash
# (Optional) initialize the default profile for local development
temporal config set --prop address --value "localhost:7233"
temporal config set --prop namespace --value "default"

# Configure a Temporal Cloud profile that authenticates with an API key
temporal --profile prod config set --prop address --value "<region>.<cloud>.api.temporal.io:7233"
temporal --profile prod config set --prop namespace --value "<namespace>.<account-id>"
temporal --profile prod config set --prop api_key --value "<your-api-key>"
```

This example shows how to set up a more advanced Temporal Cloud profile with TLS overrides and custom gRPC metadata.

```bash
# Base API key properties (replace the placeholders)
temporal --profile prod config set --prop address --value "<region>.<cloud>.api.temporal.io:7233"
temporal --profile prod config set --prop namespace --value "<namespace>.<account-id>"
temporal --profile prod config set --prop api_key --value "<your-api-key>"

# Optional TLS overrides (only needed when you must pin certs or tweak SNI)
temporal --profile prod config set --prop tls.server_name --value "<namespace>.<account-id>"
temporal --profile prod config set --prop tls.ca_cert_path --value "/path/to/ca.pem"

# Optional gRPC metadata for observability or routing
temporal --profile prod config set --prop grpc_meta.environment --value "production"
temporal --profile prod config set --prop grpc_meta.service-version --value "v1.2.3"
```

## Load configuration profile and environment variables

If you don't specify a profile, the SDKs load the `default` profile and the environment variables. If you haven't set `TEMPORAL_CONFIG_FILE`, the SDKs will look for the configuration file in the default location. Refer to [Configuration methods](#configuration-methods) for the default locations for your operating system.

No matter which profile you choose to load, environment variables are always loaded when you use the APIs in the environment configuration package to load Temporal Client connection options. They always take precedence over TOML file settings in the profiles.

To load the `default` profile along with any environment variables in Python, use the `ClientConfigProfile.load()` method from the `temporalio.envconfig` package.

```python {7-8}
import asyncio

from temporalio.client import Client
from temporalio.envconfig import ClientConfigProfile

async def main():
    # Load the "default" profile from default locations and environment variables.
    default_profile = ClientConfigProfile.load()
    connect_config = default_profile.to_client_connect_config()

    # Connect to the client using the loaded configuration.
    client = await Client.connect(**connect_config)
    print(f"✅ Client connected to {client.service_client.config.target_host} in namespace '{client.namespace}'")

if __name__ == "__main__":
    asyncio.run(main())
```

To load the `default` profile along with any environment variables in Go, use the `envconfig.MustLoadDefaultClientOptions()` function from the `go.temporal.io/sdk/contrib/envconfig` package.

```go {13}
package main

import (
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/envconfig"
)

func main() {
	// Loads the "default" profile from the standard location and environment variables.
	c, err := client.Dial(envconfig.MustLoadDefaultClientOptions())
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}
	defer c.Close()

	fmt.Printf("✅ Connected to Temporal Service")
}
```

To load the `default` profile along with any environment variables in Ruby, use the `EnvConfig::ClientConfig.load_client_connect_options()` method from the `temporalio/env_config` module.

```Ruby {16-18}
require 'pathname'
require 'temporalio/client'
require 'temporalio/env_config'

def main
  puts '--- Loading default profile from config.toml ---'

  # For this sample to be self-contained, we explicitly provide the path to
  # the config.toml file included in this directory.
  # By default though, the config.toml file will be loaded from
  # ~/.config/temporalio/temporal.toml (or the equivalent standard config directory on your OS).
  config_file = File.join(__dir__, 'config.toml')

  # load_client_connect_options is a helper that loads a profile and prepares
  # the configuration for Client.connect. By default, it loads the
  # "default" profile.
  args, kwargs = Temporalio::EnvConfig::ClientConfig.load_client_connect_options(
    config_source: Pathname.new(config_file)
  )

  puts "Loaded 'default' profile from #{config_file}."
  puts "  Address: #{args[0]}"
  puts "  Namespace: #{args[1]}"
  puts "  gRPC Metadata: #{kwargs[:rpc_metadata]}"

  puts "\nAttempting to connect to client..."
  begin
    client = Temporalio::Client.connect(*args, **kwargs)
    puts '✅ Client connected successfully!'
    sys_info = client.workflow_service.get_system_info(Temporalio::Api::WorkflowService::V1::GetSystemInfoRequest.new)
    puts "✅ Successfully verified connection to Temporal server!\n#{sys_info}"
  rescue StandardError => e
    puts "❌ Failed to connect: #{e}"
  end
end

main if $PROGRAM_NAME == __FILE__
```

To load the `default` profile along with any environment variables in .NET C#, use the `ClientEnvConfig.LoadClientConnectOptions()` method from the `Temporalio.Client.EnvConfig` namespace.

```csharp {22,27-30}
using Temporalio.Client;
using Temporalio.Client.EnvConfig;

namespace TemporalioSamples.EnvConfig;

/// <summary>
/// Sample demonstrating loading the default environment configuration profile
/// from a TOML file.
/// </summary>
public static class LoadFromFile
{
    public static async Task RunAsync()
    {
        Console.WriteLine("--- Loading default profile from config.toml ---");

        try
        {
            // For this sample to be self-contained, we explicitly provide the path to
            // the config.toml file included in this directory.
            // By default though, the config.toml file will be loaded from
            // ~/.config/temporalio/temporal.toml (or the equivalent standard config directory on your OS).
            var configFile = Path.Combine(Directory.GetCurrentDirectory(), "config.toml");

            // LoadClientConnectOptions is a helper that loads a profile and prepares
            // the config for TemporalClient.ConnectAsync. By default, it loads the
            // "default" profile.
            var connectOptions = ClientEnvConfig.LoadClientConnectOptions(new ClientEnvConfig.ProfileLoadOptions
            {
                ConfigSource = DataSource.FromPath(configFile),
            });

            Console.WriteLine($"Loaded 'default' profile from {configFile}.");
            Console.WriteLine($"  Address: {connectOptions.TargetHost}");
            Console.WriteLine($"  Namespace: {connectOptions.Namespace}");
            if (connectOptions.RpcMetadata?.Count > 0)
            {
                Console.WriteLine($"  gRPC Metadata: {string.Join(", ", connectOptions.RpcMetadata.Select(kv => $"{kv.Key}={kv.Value}"))}");
            }

            Console.WriteLine("\nAttempting to connect to client...");
            var client = await TemporalClient.ConnectAsync(connectOptions);
            Console.WriteLine("✅ Client connected successfully!");

            // Test the connection by checking the service
            var sysInfo = await client.Connection.WorkflowService.GetSystemInfoAsync(new());
            Console.WriteLine("✅ Successfully verified connection to Temporal server!\n{0}", sysInfo);
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            Console.WriteLine($"❌ Failed to connect: {ex.Message}");
        }
    }
}
```

To load the `default` profile along with any environment variables in TypeScript, use the `loadClientConnectConfig` helper from the `@temporalio/envconfig` package.

{/* SNIPSTART typescript-env-config-load-default-profile {"highlightedLines": "17-19,28-29"} */}

[env-config/src/load-from-file.ts](https://github.com/temporalio/samples-typescript/blob/main/env-config/src/load-from-file.ts)

```ts {17-19,28-29}
import { Client, Connection } from '@temporalio/client';
import { loadClientConnectConfig } from '@temporalio/envconfig';
import { resolve } from 'path';

async function main() {
  console.log('--- Loading default profile from config.toml ---');

  // For this sample to be self-contained, we explicitly provide the path to
  // the config.toml file included in this directory.
  // By default though, the config.toml file will be loaded from
  // ~/.config/temporalio/temporal.toml (or the equivalent standard config directory on your OS).
  const configFile = resolve(__dirname, '../config.toml');

  // loadClientConnectConfig is a helper that loads a profile and prepares
  // the configuration for Connection.connect and Client. By default, it loads the
  // "default" profile.
  const config = loadClientConnectConfig({
    configSource: { path: configFile },
  });

  console.log(`Loaded 'default' profile from ${configFile}.`);
  console.log(`  Address: ${config.connectionOptions.address}`);
  console.log(`  Namespace: ${config.namespace}`);
  console.log(`  gRPC Metadata: ${JSON.stringify(config.connectionOptions.metadata)}`);

  console.log('\nAttempting to connect to client...');
  try {
    const connection = await Connection.connect(config.connectionOptions);
    const client = new Client({ connection, namespace: config.namespace });
    console.log('✅ Client connected successfully!');
    await connection.close();
  } catch (err) {
    console.log(`❌ Failed to connect: ${err}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

{/* SNIPEND */}

To load the `default` profile along with any environment variables in Java, use the `ClientConfigProfile.load` method from the `envconfig` package. This method will load the `default` profile from the default location and any environment variables. Environment variables take precedence over the configuration file settings. Then use `profile.toWorkflowServiceStubsOptions` and `profile.toWorkflowClientOptions` to convert the profile to `WorkflowServiceStubsOptions` and `WorkflowClientOptions` respectively. Then use `WorkflowClient.newInstance` to create a Temporal Client.

```java
public class LoadFromFile {
  private static final Logger logger = LoggerFactory.getLogger(LoadFromFile.class);

  public static void main(String[] args) {
    try {
      ClientConfigProfile profile =
          ClientConfigProfile.load(LoadClientConfigProfileOptions.newBuilder().build());

      WorkflowServiceStubsOptions serviceStubsOptions = profile.toWorkflowServiceStubsOptions();
      WorkflowClientOptions clientOptions = profile.toWorkflowClientOptions();

      try {
        // Create the workflow client using the loaded configuration
        WorkflowClient client =
            WorkflowClient.newInstance(
                WorkflowServiceStubs.newServiceStubs(serviceStubsOptions), clientOptions);

        // Test the connection by getting system info
        var systemInfo =
            client
                .getWorkflowServiceStubs()
                .blockingStub()
                .getSystemInfo(
                    io.temporal.api.workflowservice.v1.GetSystemInfoRequest.getDefaultInstance());

        logger.info("✅ Client connected successfully!");
        logger.info("  Server version: {}", systemInfo.getServerVersion());
      } catch (Exception e) {
        logger.error("❌ Failed to connect: {}", e.getMessage());
      }
    } catch (Exception e) {
      logger.error("Failed to load configuration: {}", e.getMessage(), e);
      System.exit(1);
    }
  }
}
```

## Load configuration from a custom path

To load configuration from a non-standard file location without relying on the `TEMPORAL_CONFIG_FILE` environment variable, you can use a function from the environment configuration package; the specific method you need to call depends on the SDK you are using. This is useful if you store application-specific configurations separately. Loading connection options using this method will still respect environment variables, which take precedence over the file settings.

To load a specific profile from a custom path in Python, use the `ClientConfig.load_client_connect_config()` method with the `config_file` parameter. In this example, we construct the path to a `config.toml` file located in the same directory as the script. After loading the connection options, you can override specific settings programmatically before passing them to `Client.connect()`.
```py {12-13,21-23}
import asyncio
from pathlib import Path

from temporalio.client import Client
from temporalio.envconfig import ClientConfig

async def main():
    """
    Demonstrates loading a named profile and overriding values programmatically.
    """
    print("--- Loading 'staging' profile with programmatic overrides ---")

    config_file = Path(__file__).parent / "config.toml"
    profile_name = "staging"

    print("The 'staging' profile in config.toml has an incorrect address (localhost:9999).")
    print("We'll programmatically override it to the correct address.")

    # Load the 'staging' profile.
    connect_config = ClientConfig.load_client_connect_config(
        profile=profile_name,
        config_file=str(config_file),
    )

    # Override the target host to the correct address.
    # This is the recommended way to override configuration values.
    connect_config["target_host"] = "localhost:7233"

    print(f"\nLoaded '{profile_name}' profile from {config_file} with overrides.")
    print(f"  Address: {connect_config.get('target_host')} (overridden from localhost:9999)")
    print(f"  Namespace: {connect_config.get('namespace')}")

    print("\nAttempting to connect to client...")
    try:
        await Client.connect(**connect_config)  # type: ignore
        print("✅ Client connected successfully!")
    except Exception as e:
        print(f"❌ Failed to connect: {e}")

if __name__ == "__main__":
    asyncio.run(main())
```

To load a specific profile from a custom file path in Go, use the `envconfig.LoadClientOptions()` function with the `ConfigFilePath` field set in the `LoadClientOptionsRequest` struct. Use the `ConfigFileProfile` field to specify the profile name. After loading the connection options, you can override specific settings programmatically before passing them to `client.Dial()`. Refer to the [Go SDK API documentation](https://pkg.go.dev/go.temporal.io/sdk/contrib/envconfig) for all available options.

```go {14-16}
package main

import (
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/envconfig"
)

func main() {
	// Load a specific profile from the TOML config file.
	// This requires a [profile.prod] section in your config.
	opts, err := envconfig.LoadClientOptions(envconfig.LoadClientOptionsRequest{
		ConfigFileProfile: "prod",
		ConfigFilePath:    "/Users/yourname/.config/my-app/temporal.toml",
	})
	if err != nil {
		log.Fatalf("Failed to load 'prod' profile: %v", err)
	}

	// Programmatically override the Namespace value.
	opts.Namespace = "new-namespace"

	c, err := client.Dial(opts)
	if err != nil {
		log.Fatalf("Failed to connect using 'prod' profile: %v", err)
	}
	defer c.Close()

	fmt.Printf("✅ Connected to Temporal namespace %q on %s using 'prod' profile\n",
		c.Options().Namespace, c.Options().HostPort)
}
```

To load a specific profile from a custom path in Ruby, use the `EnvConfig::ClientConfig.load_client_connect_options()` method with the `config_source` parameter. In this example, we construct the path to a `config.toml` file located in the same directory as the script. Use the `profile` parameter to specify the profile name. After loading the connection options, you can override specific settings programmatically before passing them to `Client.connect()`. Refer to the [Ruby SDK API documentation](https://ruby.temporal.io/Temporalio/EnvConfig.html) for all available options.

```Ruby {7-8,14-16}
require 'pathname'
require 'temporalio/client'
require 'temporalio/env_config'

def main
  puts "--- Loading 'staging' profile with programmatic overrides ---"

  config_file = File.join(__dir__, 'config.toml')
  profile_name = 'staging'

  puts "The 'staging' profile in config.toml has an incorrect address (localhost:9999)."
puts "We'll programmatically override it to the correct address." # Load the 'staging' profile. args, kwargs = Temporalio::EnvConfig::ClientConfig.load_client_connect_options( profile: profile_name, config_source: Pathname.new(config_file) ) # Override the target host to the correct address. # This is the recommended way to override configuration values. args[0] = 'localhost:7233' puts "\nLoaded '#{profile_name}' profile from #{config_file} with overrides." puts " Address: #{args[0]} (overridden from localhost:9999)" puts " Namespace: #{args[1]}" puts "\nAttempting to connect to client..." begin client = Temporalio::Client.connect(*args, **kwargs) puts '✅ Client connected successfully!' sys_info = client.workflow_service.get_system_info(Temporalio::Api::WorkflowService::V1::GetSystemInfoRequest.new) puts "✅ Successfully verified connection to Temporal server!\n#{sys_info}" rescue StandardError => e puts "❌ Failed to connect: #{e}" end end main if $PROGRAM_NAME == __FILE__ ``` To load a specific profile from a custom path in .NET C#, use the `ClientEnvConfig.LoadClientConnectOptions()` method with the `ProfileLoadOptions` parameter. Use the `Profile` property to specify the profile name and the `ConfigSource` property to specify the file path. After loading the connection options, you can override specific settings programmatically before passing them to `TemporalClient.ConnectAsync()`. Refer to the [C# SDK API documentation](https://dotnet.temporal.io/api/Temporalio.Common.EnvConfig.html) for all available options. ```csharp {18-19,25-28} using Temporalio.Client; using Temporalio.Client.EnvConfig; namespace TemporalioSamples.EnvConfig; /// /// Sample demonstrating loading a named environment configuration profile and /// programmatically overriding its values. /// public static class LoadProfile { public static async Task RunAsync() { Console.WriteLine("--- Loading 'staging' profile with programmatic overrides ---"); try { var configFile = Path.Combine(Directory.GetCurrentDirectory(), "config.toml"); var profileName = "staging"; Console.WriteLine("The 'staging' profile in config.toml has an incorrect address (localhost:9999)."); Console.WriteLine("We'll programmatically override it to the correct address."); // Load the 'staging' profile var connectOptions = ClientEnvConfig.LoadClientConnectOptions(new ClientEnvConfig.ProfileLoadOptions { Profile = profileName, ConfigSource = DataSource.FromPath(configFile), }); // Override the target host to the correct address. // This is the recommended way to override configuration values. 
            connectOptions.TargetHost = "localhost:7233";

            Console.WriteLine($"\nLoaded '{profileName}' profile from {configFile} with overrides.");
            Console.WriteLine($"  Address: {connectOptions.TargetHost} (overridden from localhost:9999)");
            Console.WriteLine($"  Namespace: {connectOptions.Namespace}");

            Console.WriteLine("\nAttempting to connect to client...");
            var client = await TemporalClient.ConnectAsync(connectOptions);
            Console.WriteLine("✅ Client connected successfully!");

            // Test the connection by checking the service
            var sysInfo = await client.Connection.WorkflowService.GetSystemInfoAsync(new());
            Console.WriteLine("✅ Successfully verified connection to Temporal server!\n{0}", sysInfo);
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            Console.WriteLine($"❌ Failed to connect: {ex.Message}");
        }
    }
}
```

To load a specific profile from a custom path in TypeScript, use the `loadClientConnectConfig` helper from the `@temporalio/envconfig` package with the `profile` and `configFile` options. The call follows the same pattern as the default-profile sample shown earlier, with the profile name supplied in the load options alongside the config source.

To load a profile configuration file from a custom path in Java, use the `ClientConfigProfile.load` method from the `envconfig` package with the `ConfigFilePath` parameter. This method will load the profile from the custom path and any environment variables. Environment variables take precedence over the configuration file settings.
```java {21-25}
public class LoadFromFile {
  private static final Logger logger = LoggerFactory.getLogger(LoadFromFile.class);

  public static void main(String[] args) {
    try {
      String configFilePath =
          Paths.get(LoadFromFile.class.getResource("/config.toml").toURI()).toString();

      ClientConfigProfile profile =
          ClientConfigProfile.load(
              LoadClientConfigProfileOptions.newBuilder()
                  .setConfigFilePath(configFilePath)
                  .build());

      WorkflowServiceStubsOptions serviceStubsOptions = profile.toWorkflowServiceStubsOptions();
      WorkflowClientOptions clientOptions = profile.toWorkflowClientOptions();

      try {
        // Create the workflow client using the loaded configuration
        WorkflowClient client =
            WorkflowClient.newInstance(
                WorkflowServiceStubs.newServiceStubs(serviceStubsOptions), clientOptions);

        // Test the connection by getting system info
        var systemInfo =
            client
                .getWorkflowServiceStubs()
                .blockingStub()
                .getSystemInfo(
                    io.temporal.api.workflowservice.v1.GetSystemInfoRequest.getDefaultInstance());

        logger.info("✅ Client connected successfully!");
        logger.info("  Server version: {}", systemInfo.getServerVersion());
      } catch (Exception e) {
        logger.error("❌ Failed to connect: {}", e.getMessage());
      }
    } catch (Exception e) {
      logger.error("Failed to load configuration: {}", e.getMessage(), e);
      System.exit(1);
    }
  }
}
```

---

## Asynchronous Activity completion - Go SDK

[Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) enables the Activity Function to return without the Activity Execution completing. There are three steps to follow:

1. The Activity provides the external system with identifying information needed to complete the Activity Execution. Identifying information can be a [Task Token](/activity-execution#task-token), or a combination of Namespace, Workflow Id, and Activity Id.
2. The Activity Function completes in a way that identifies it as waiting to be completed by an external system.
3. The Temporal Client is used to Heartbeat and complete the Activity.

Here is how to carry out those steps in Go:

1. Provide the external system with a Task Token to complete the Activity Execution. To do this, use the `GetInfo()` API from the `go.temporal.io/sdk/activity` package.

```go
// Retrieve the Activity information needed to asynchronously complete the Activity.
activityInfo := activity.GetInfo(ctx)
taskToken := activityInfo.TaskToken
// Send the taskToken to the external service that will complete the Activity.
```

2. Return an `activity.ErrResultPending` error to indicate that the Activity is completing asynchronously.

```go
return "", activity.ErrResultPending
```

3. Use the Temporal Client to complete the Activity using the Task Token.

```go
// Instantiate a Temporal service client.
// The same client can be used to complete or fail any number of Activities.
// The client is a heavyweight object that should be created once per process.
temporalClient, err := client.Dial(client.Options{})

// Complete the Activity.
temporalClient.CompleteActivity(context.Background(), taskToken, result, nil)
```

The following are the parameters of the `CompleteActivity` function:

- `taskToken`: The value of the binary `TaskToken` field of the `ActivityInfo` struct retrieved inside the Activity.
- `result`: The return value to record for the Activity. The type of this value must match the type of the return value declared by the Activity function.
- `err`: The error to return if the Activity terminates with an error. If `err` is not `nil`, the value of the `result` field is ignored.
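Putting the pieces together, an external completer might look like the following minimal sketch. The `completionRequest` type and `receiveCompletionRequests` helper are hypothetical stand-ins for your own transport (queue, webhook, etc.); only the Temporal Client calls are from the SDK:

```go
package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
)

// completionRequest is a hypothetical message delivered by your own
// transport once the external work is done.
type completionRequest struct {
	TaskToken []byte
	Result    string
	Err       error
}

func main() {
	// One heavyweight client per process, reused for every completion.
	temporalClient, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalf("Unable to create Temporal Client: %v", err)
	}
	defer temporalClient.Close()

	for req := range receiveCompletionRequests() { // hypothetical source
		// Record either a result or a failure against the Task Token.
		if err := temporalClient.CompleteActivity(
			context.Background(), req.TaskToken, req.Result, req.Err); err != nil {
			log.Printf("Unable to complete Activity: %v", err)
		}
	}
}

// receiveCompletionRequests is a stand-in for your own integration point.
func receiveCompletionRequests() <-chan completionRequest {
	ch := make(chan completionRequest)
	close(ch) // placeholder: nothing to deliver in this sketch
	return ch
}
```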
To fail the Activity, you would do the following:

```go
// Fail the Activity.
temporalClient.CompleteActivity(context.Background(), taskToken, nil, err)
```

---

## Benign exceptions - Go SDK

**How to mark an Activity error as benign using the Temporal Go SDK**

When Activities return errors that are expected or not severe, they can create noise in your logs, metrics, and OpenTelemetry traces, making it harder to identify real issues. By marking these errors as benign, you can exclude them from your observability data while still handling them in your Workflow logic.

To mark an error as benign, use [`temporal.NewApplicationErrorWithOptions`](https://pkg.go.dev/go.temporal.io/sdk/temporal#NewApplicationErrorWithOptions) and set the `Category` field to `temporal.ApplicationErrorCategoryBenign` in the `ApplicationErrorOptions`.

Benign errors:

- Have Activity failure logs downgraded to DEBUG level
- Do not emit Activity failure metrics
- Do not set the OpenTelemetry failure status to ERROR

```go
import (
	"context"

	"go.temporal.io/sdk/temporal"
)

func MyActivity(ctx context.Context) (string, error) {
	result, err := callExternalService()
	if err != nil {
		// Mark this error as benign since it's expected
		return "", temporal.NewApplicationErrorWithOptions(
			err.Error(),
			"",
			temporal.ApplicationErrorOptions{
				Category: temporal.ApplicationErrorCategoryBenign,
			},
		)
	}
	return result, nil
}
```

Use benign exceptions for Activity errors that occur regularly as part of normal operations, such as polling an external service that isn't ready yet, or handling expected transient failures that will be retried.

---

## Interrupt a Workflow - Go SDK

This page shows the following:

- How to handle a Cancellation request within a Workflow.
- How to set an Activity Heartbeat Timeout.
- How to listen for and handle a Cancellation request within an Activity.
- How to send a Cancellation request from a Temporal Client.
- Heartbeating after a Cancellation.

## Handle Cancellation in Workflow {#handle-cancellation-in-workflow}

**How to handle a Cancellation in a Workflow in Go.**

Workflow Definitions can be written to handle execution cancellation requests with Go's `defer` and the `workflow.NewDisconnectedContext` API. In the Workflow Definition below, there is a special Activity that handles cleanup should the execution be canceled.

If the Workflow receives a Cancellation Request but all Activities gracefully handle the Cancellation and no Activities are skipped, the Workflow status will be Completed. Whether you return the Cancellation error (resulting in a Canceled status) or complete normally, regardless of whether a Cancellation has propagated to or skipped Activities, is entirely up to the needs of your business process and use case.

[sample-apps/go/features/cancellation/workflow.go](https://github.com/temporalio/documentation/blob/main/sample-apps/go/features/cancellation/workflow.go)

```go
// ...
// YourWorkflow is a Workflow Definition that shows how it can be canceled.
func YourWorkflow(ctx workflow.Context) error {
	// ...
	activityOptions := workflow.ActivityOptions{
		// ...
		HeartbeatTimeout: 5 * time.Second,
		// Set WaitForCancellation to true to have the Workflow wait to return
		// until all in progress Activities have completed, failed, or accepted the Cancellation.
		WaitForCancellation: true,
	}
	defer func() {
		// This logic ensures cleanup only happens if there is a Cancellation error
		if !errors.Is(ctx.Err(), workflow.ErrCanceled) {
			return
		}
		// For the Workflow to execute an Activity after it receives a Cancellation Request,
		// it has to get a new disconnected context.
		newCtx, _ := workflow.NewDisconnectedContext(ctx)
		// This Activity is only executed when the Workflow is canceled.
		err := workflow.ExecuteActivity(newCtx, a.CleanupActivity).Get(ctx, nil)
		if err != nil {
			logger.Error("CleanupActivity failed", "Error", err)
		}
	}()
	// ...
	err := workflow.ExecuteActivity(ctx, a.ActivityToBeCanceled).Get(ctx, &result)
	// ...
	// This call to execute the Activity is expected to return an error "canceled",
	// and the Activity Execution is skipped.
	err = workflow.ExecuteActivity(ctx, a.ActivityToBeSkipped).Get(ctx, nil)
	// ...
	// Return any errors.
	// If a CanceledError is returned, the Workflow changes to a Canceled state.
	return err
}
```

## Handle Cancellation in an Activity {#handle-cancellation-in-an-activity}

**How to handle a Cancellation in an Activity in Go.**

Ensure that the Activity is Heartbeating to receive the Cancellation request and stop execution. View the source code in the context of the rest of the application code.

```go
// ActivityToBeCanceled is the Activity that will respond to the Cancellation Request
func (a *Activities) ActivityToBeCanceled(ctx context.Context) (string, error) {
	// ...
	// A for-select statement is a common approach to listening for a Cancellation in an Activity.
	for {
		select {
		case <-time.After(1 * time.Second):
			logger.Info("Heartbeating...")
			activity.RecordHeartbeat(ctx, "")
		// Listen for ctx.Done() to know if a Cancellation Request has propagated to the Activity.
		case <-ctx.Done():
			logger.Info("This Activity is canceled!")
			return "I am canceled by Done", nil
		}
	}
}
// ...
```

## Request Cancellation {#request-cancellation}

**How to request Cancellation of a Workflow and Activities in Go.**

Use the `CancelWorkflow` API to cancel a Workflow Execution using its Id. View the source code in the context of the rest of the application code.

```go
func main() {
	// ...
	// Call the CancelWorkflow API to cancel a Workflow.
	// In this call we are relying on the Workflow Id only,
	// but a Run Id can also be supplied to ensure the correct Workflow is canceled.
	err = temporalClient.CancelWorkflow(context.Background(), cancellation.WorkflowId, "")
	if err != nil {
		log.Fatalln("Unable to cancel Workflow Execution", err)
	}
	// ...
}
```

## Heartbeating after a Cancellation

Sometimes you may want to continue running your Activity even after a Cancellation has been issued. You may want to completely ignore the Cancellation and continue Activity execution, including Heartbeating, or you may want to send one final Heartbeat after Cancellation.

Even though the context is canceled when the Workflow is canceled, you are still able to send Activity Heartbeats. When you call `activity.RecordHeartbeat` after Cancellation has occurred, a `WARN RecordActivityHeartbeat with error Error context canceled` message will be logged, and a `context canceled` error will be returned from the call. However, the Heartbeat **has** still been sent.

## Reset a Workflow Execution {#reset}

Resetting a Workflow Execution terminates the current Workflow Execution and starts a new Workflow Execution from a point you specify in its Event History. Use reset when a Workflow is blocked due to a non-deterministic error or other issues that prevent it from completing.
When you reset a Workflow, the Event History up to the reset point is copied to the new Workflow Execution, and the Workflow resumes from that point with the current code. Reset only works if you've fixed the underlying issue, such as removing non-deterministic code. Any progress made after the reset point will be discarded. Provide a reason when resetting, as it will be recorded in the Event History.

1. Navigate to the Workflow Execution details page.
2. Click the **Reset** button in the top right dropdown menu.
3. Select the Event ID to reset to.
4. Provide a reason for the reset.
5. Confirm the reset.

The Web UI shows available reset points and creates a link to the new Workflow Execution after the reset completes.

Use the `temporal workflow reset` command to reset a Workflow Execution:

```bash
temporal workflow reset \
  --workflow-id <YourWorkflowId> \
  --event-id <EventId> \
  --reason "Reason for reset"
```

For example:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code"
```

By default, the command resets the latest Workflow Execution in the `default` Namespace. Use `--run-id` to reset a specific run. Use `--namespace` to specify a different Namespace:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code" \
  --namespace my-namespace \
  --tls-cert-path /path/to/cert.pem \
  --tls-key-path /path/to/key.pem
```

Monitor the new Workflow Execution after resetting to ensure it completes successfully.

---

## Child Workflows - Go SDK

This page shows how to do the following:

- [Start a Child Workflow Execution](#child-workflows)
- [Set a Parent Close Policy](#parent-close-policy)

## Start a Child Workflow Execution {#child-workflows}

**How to start a Child Workflow Execution using the Go SDK.**

A [Child Workflow Execution](/child-workflows) is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.

When using a Child Workflow API, Child Workflow related Events ([StartChildWorkflowExecutionInitiated](/references/events#startchildworkflowexecutioninitiated), [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted), [ChildWorkflowExecutionCompleted](/references/events#childworkflowexecutioncompleted), etc.) are logged in the Workflow Execution Event History.

Always block progress until the [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted) Event is logged to the Event History to ensure the Child Workflow Execution has started. After that, Child Workflow Executions may be abandoned using the _Abandon_ [Parent Close Policy](/parent-close-policy) set in the Child Workflow Options.

To be sure that the Child Workflow Execution has started, first call the `GetChildWorkflowExecution` method on the Child Workflow future, which returns a second future. Then call `Get()` on that second future; it blocks until the Child Workflow Execution has spawned.

To spawn a [Child Workflow Execution](/child-workflows) in Go, use the [`ExecuteChildWorkflow`](https://pkg.go.dev/go.temporal.io/sdk/workflow#ExecuteChildWorkflow) API, which is available from the `go.temporal.io/sdk/workflow` package.
The `ExecuteChildWorkflow` call requires an instance of [`workflow.Context`](https://pkg.go.dev/go.temporal.io/sdk/workflow#Context), with an instance of [`workflow.ChildWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk/workflow#ChildWorkflowOptions) applied to it, the Workflow Type, and any parameters that should be passed to the Child Workflow Execution.

`workflow.ChildWorkflowOptions` contains the same fields as `client.StartWorkflowOptions`. Workflow Option fields automatically inherit their values from the Parent Workflow Options if they are not explicitly set. If a custom `WorkflowID` is not set, one is generated when the Child Workflow Execution is spawned.

Use the [`WithChildOptions`](https://pkg.go.dev/go.temporal.io/sdk/workflow#WithChildOptions) API to apply Child Workflow Options to the instance of `workflow.Context`.

The `ExecuteChildWorkflow` call returns an instance of a [`ChildWorkflowFuture`](https://pkg.go.dev/go.temporal.io/sdk/workflow#ChildWorkflowFuture).

Call the `.Get()` method on the instance of `ChildWorkflowFuture` to wait for the result.

```go
func YourWorkflowDefinition(ctx workflow.Context, params ParentParams) (ParentResp, error) {
    childWorkflowOptions := workflow.ChildWorkflowOptions{}
    ctx = workflow.WithChildOptions(ctx, childWorkflowOptions)

    var result ChildResp
    err := workflow.ExecuteChildWorkflow(ctx, YourOtherWorkflowDefinition, ChildParams{}).Get(ctx, &result)
    if err != nil {
        // ...
    }
    // ...
    return resp, nil
}

func YourOtherWorkflowDefinition(ctx workflow.Context, params ChildParams) (ChildResp, error) {
    // ...
    return resp, nil
}
```

### Async Child Workflows

To asynchronously spawn a Child Workflow Execution, the Child Workflow must have an "Abandon" Parent Close Policy set in the Child Workflow Options. Additionally, the Parent Workflow Execution must wait for the `ChildWorkflowExecutionStarted` Event to appear in its Event History before it completes.

If the Parent makes the `ExecuteChildWorkflow` call and then immediately completes, the Child Workflow Execution does not spawn.

To be sure that the Child Workflow Execution has started, first call the `GetChildWorkflowExecution` method on the instance of the `ChildWorkflowFuture`, which will return a different Future. Then call the `Get()` method on that Future, which is what will wait until the Child Workflow Execution has spawned.

```go
// ...
    "go.temporal.io/api/enums/v1"
)

func YourWorkflowDefinition(ctx workflow.Context, params ParentParams) (ParentResp, error) {
    childWorkflowOptions := workflow.ChildWorkflowOptions{
        ParentClosePolicy: enums.PARENT_CLOSE_POLICY_ABANDON,
    }
    ctx = workflow.WithChildOptions(ctx, childWorkflowOptions)

    childWorkflowFuture := workflow.ExecuteChildWorkflow(ctx, YourOtherWorkflowDefinition, ChildParams{})
    // Wait for the Child Workflow Execution to spawn
    var childWE workflow.Execution
    if err := childWorkflowFuture.GetChildWorkflowExecution().Get(ctx, &childWE); err != nil {
        return ParentResp{}, err
    }
    // ...
    return resp, nil
}

func YourOtherWorkflowDefinition(ctx workflow.Context, params ChildParams) (ChildResp, error) {
    // ...
    return resp, nil
}
```

## Set a Parent Close Policy {#parent-close-policy}

**How to set a Parent Close Policy for a Child Workflow Execution using the Go SDK.**

A [Parent Close Policy](/parent-close-policy) determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out). The default Parent Close Policy is to terminate the Child Workflow Execution.
In Go, a Parent Close Policy is set on the `ParentClosePolicy` field of an instance of [`workflow.ChildWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk/workflow#ChildWorkflowOptions). The possible values can be obtained from the [`go.temporal.io/api/enums/v1`](https://pkg.go.dev/go.temporal.io/api/enums/v1#ParentClosePolicy) package.

- `PARENT_CLOSE_POLICY_ABANDON`
- `PARENT_CLOSE_POLICY_TERMINATE`
- `PARENT_CLOSE_POLICY_REQUEST_CANCEL`

The Child Workflow Options are then applied to the instance of `workflow.Context` by using the `WithChildOptions` API, which is then passed to the `ExecuteChildWorkflow()` call.

- Type: [`ParentClosePolicy`](https://pkg.go.dev/go.temporal.io/api/enums/v1#ParentClosePolicy)
- Default: `PARENT_CLOSE_POLICY_TERMINATE`

```go
// ...
    "go.temporal.io/api/enums/v1"
)

func YourWorkflowDefinition(ctx workflow.Context, params ParentParams) (ParentResp, error) {
    // ...
    childWorkflowOptions := workflow.ChildWorkflowOptions{
        // ...
        ParentClosePolicy: enums.PARENT_CLOSE_POLICY_ABANDON,
    }
    ctx = workflow.WithChildOptions(ctx, childWorkflowOptions)
    childWorkflowFuture := workflow.ExecuteChildWorkflow(ctx, YourOtherWorkflowDefinition, ChildParams{})
    // ...
}

func YourOtherWorkflowDefinition(ctx workflow.Context, params ChildParams) (ChildResp, error) {
    // ...
    return resp, nil
}
```

---

## Continue-As-New - Go SDK

This page answers the following questions for Go developers:

- [What is Continue-As-New?](#what)
- [How to Continue-As-New?](#how)
- [When is it right to Continue-as-New?](#when)
- [How to test Continue-as-New?](#how-to-test)

## What is Continue-As-New? {#what}

[Continue-As-New](/workflow-execution/continue-as-new) lets a Workflow Execution close successfully and creates a new Workflow Execution. You can think of it as a checkpoint when your Workflow gets too long or approaches certain scaling limits. The new Workflow Execution is in the same [chain](/workflow-execution#workflow-execution-chain); it keeps the same Workflow Id but gets a new Run Id and a fresh Event History. It also receives your Workflow's usual parameters.

## How to Continue-As-New using the Go SDK {#how}

First, design your Workflow parameters so that you can pass in the "current state" when you Continue-As-New into the next Workflow run. In Go, this state is typically `nil` for the original caller of the Workflow.

View the source code in the context of the rest of the application code.

```go
type ClusterManagerInput struct {
    State             *ClusterManagerState
    TestContinueAsNew bool
}

func newClusterManager(ctx workflow.Context, wfInput ClusterManagerInput) (*ClusterManager, error) {
    // ...
}
```

The test hook in the above snippet is covered [below](#how-to-test).

Inside your Workflow, return the [`NewContinueAsNewError`](https://pkg.go.dev/go.temporal.io/sdk/workflow#NewContinueAsNewError) error. This stops the Workflow right away and starts a new one.

View the source code in the context of the rest of the application code.

```go
return ClusterManagerResult{}, workflow.NewContinueAsNewError(
    ctx,
    ClusterManagerWorkflow,
    ClusterManagerInput{
        State:             &cm.state,
        TestContinueAsNew: cm.testContinueAsNew,
    },
)
```

### Considerations for Workflows with Message Handlers {#with-message-handlers}

If you use Updates or Signals, don't call Continue-as-New from the handlers. Instead, wait for your handlers to finish in your main Workflow before you return `NewContinueAsNewError`. See the [`AllHandlersFinished`](message-passing#wait-for-message-handlers) example for guidance, and the sketch below.
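As a minimal sketch of that pattern, assuming the `ClusterManager` names from the snippets above and a Go SDK version that provides `workflow.AllHandlersFinished`, you can block on the handlers draining before returning the Continue-As-New error:

```go
// Sketch only: drain in-flight Update and Signal handlers before Continue-As-New.
// ClusterManagerWorkflow, ClusterManagerInput, and cm are assumed from the sample above.
if err := workflow.Await(ctx, func() bool {
    return workflow.AllHandlersFinished(ctx)
}); err != nil {
    return ClusterManagerResult{}, err
}
return ClusterManagerResult{}, workflow.NewContinueAsNewError(
    ctx,
    ClusterManagerWorkflow,
    ClusterManagerInput{State: &cm.state, TestContinueAsNew: cm.testContinueAsNew},
)
```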
## When is it right to Continue-as-New using the Go SDK? {#when}

Use Continue-as-New when your Workflow might hit [Event History Limits](/workflow-execution/event#event-history). Temporal tracks your Workflow's progress against these limits to let you know when you should Continue-as-New. Call `workflow.GetInfo(ctx).GetContinueAsNewSuggested()` to check if it's time.

## How to test Continue-as-New using the Go SDK {#how-to-test}

Testing Workflows that naturally Continue-as-New may be time-consuming and resource-intensive. Instead, add a test hook to check your Workflow's Continue-as-New behavior faster in automated tests.

For example, when `TestContinueAsNew == true`, this sample creates a test-only variable called `maxHistoryLength` and sets it to a small value. A helper method in the Workflow checks it each time it considers using Continue-as-New:

View the source code in the context of the rest of the application code.

```go
func (cm *ClusterManager) shouldContinueAsNew(ctx workflow.Context) bool {
    if workflow.GetInfo(ctx).GetContinueAsNewSuggested() {
        return true
    }
    if cm.maxHistoryLength > 0 && workflow.GetInfo(ctx).GetCurrentHistoryLength() > cm.maxHistoryLength {
        return true
    }
    return false
}
```

---

## Converters and encryption - Go SDK

Temporal's security model is designed around client-side encryption of Payloads. A client may encrypt Payloads before sending them to the server, and decrypt them after receiving them from the server. This provides a high degree of confidentiality because the Temporal Server itself has absolutely no knowledge of the actual data. It also gives implementers more power and more freedom regarding which client is able to read which data; they can control access with keys, algorithms, or other security measures.

A Temporal developer adds client-side encryption of Payloads by providing a Custom Payload Codec to its Client. Depending on business needs, a complete implementation of Payload Encryption may involve selecting appropriate encryption algorithms, managing encryption keys, restricting a subset of their users from viewing payload output, or a combination of these.

The server itself never adds encryption over Payloads. Therefore, unless client-side encryption is implemented, Payload data will be persisted in non-encrypted form to the data store, and any Client that can make requests to a Temporal namespace (including the Temporal UI and CLI) will be able to read Payloads contained in Workflows. When working with sensitive data, you should always implement Payload encryption.

## Use a custom Payload Codec in Go {#custom-payload-codec}

**How to use a custom Payload Codec using the Go SDK.**

**Step 1: Create a custom Payload Codec**

Create a custom [PayloadCodec](https://pkg.go.dev/go.temporal.io/sdk/converter#PayloadCodec) implementation and define your encryption/compression and decryption/decompression logic in the `Encode` and `Decode` functions.

The Payload Codec converts bytes to bytes. It must be used in an instance of [CodecDataConverter](https://pkg.go.dev/go.temporal.io/sdk/converter#CodecDataConverter) that wraps a Data Converter to do the [Payload](/dataconversion#payload) conversions, and applies the custom encoding and decoding in `PayloadCodec` to the converted Payloads.

The following example from the [Data Converter sample](https://github.com/temporalio/samples-go/blob/main/codec-server/data_converter.go) shows how to create a custom `NewCodecDataConverter` that wraps an instance of a Data Converter with a custom `PayloadCodec`.
```go
// Create an instance of Data Converter with your codec.
var DataConverter = converter.NewCodecDataConverter(
    converter.GetDefaultDataConverter(),
    NewPayloadCodec(),
)

//...

// Create an instance of PayloadCodec.
func NewPayloadCodec() converter.PayloadCodec {
    return &Codec{}
}
```

Implement your encryption/compression logic in the `Encode` function and the decryption/decompression logic in the `Decode` function in your custom `PayloadCodec`, as shown in the following example.

```go
import (
    commonpb "go.temporal.io/api/common/v1"
    "go.temporal.io/sdk/converter"

    "github.com/golang/snappy"
)

// Codec implements converter.PayloadCodec for snappy compression.
type Codec struct{}

// Encode implements converter.PayloadCodec.Encode.
func (Codec) Encode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
    result := make([]*commonpb.Payload, len(payloads))
    for i, p := range payloads {
        // Marshal proto
        origBytes, err := p.Marshal()
        if err != nil {
            return payloads, err
        }
        // Compress
        b := snappy.Encode(nil, origBytes)
        result[i] = &commonpb.Payload{
            Metadata: map[string][]byte{converter.MetadataEncoding: []byte("binary/snappy")},
            Data:     b,
        }
    }
    return result, nil
}

// Decode implements converter.PayloadCodec.Decode.
func (Codec) Decode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
    result := make([]*commonpb.Payload, len(payloads))
    for i, p := range payloads {
        // Decode only if it's our encoding
        if string(p.Metadata[converter.MetadataEncoding]) != "binary/snappy" {
            result[i] = p
            continue
        }
        // Uncompress
        b, err := snappy.Decode(nil, p.Data)
        if err != nil {
            return payloads, err
        }
        // Unmarshal proto
        result[i] = &commonpb.Payload{}
        err = result[i].Unmarshal(b)
        if err != nil {
            return payloads, err
        }
    }
    return result, nil
}
```

**Step 2: Set Data Converter to use custom Payload Codec.**

Set your custom `PayloadCodec` with an instance of `DataConverter` in your `Dial` client options that you use to create the client. The following example shows how to set your custom Data Converter from a package called `mycodecpackage`.

```go
//...
c, err := client.Dial(client.Options{
    // Set DataConverter here to ensure that Workflow inputs and results are
    // encoded as required.
    DataConverter: mycodecpackage.DataConverter,
})
//...
```

- Data **encoding** is performed by the client using the converters and codecs provided by Temporal or your custom implementation when passing input to the Temporal Cluster. For example, plain text input is usually serialized into a JSON object, and can then be compressed or encrypted.
- Data **decoding** may be performed by your application logic during your Workflows or Activities as necessary, but decoded Workflow results are never persisted back to the Temporal Cluster. Instead, they are stored encoded on the Cluster, and you need to provide an additional parameter when using the [temporal workflow show](/cli/workflow#show) command or when browsing the Web UI to view output.

For reference, see the [Encryption](https://github.com/temporalio/samples-go/tree/main/encryption) sample.

### Using a Codec Server

A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. The Codec Server is independent of the Temporal Cluster and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment. The Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand. Refer to the [Codec Server](/production-deployment/data-encryption) documentation for information on how to design and deploy a Codec Server.
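As a minimal sketch, the Go SDK's `converter.NewPayloadCodecHTTPHandler` can serve the encode/decode endpoints for a codec. Here the `Codec` type is assumed to be the snappy codec from the example above (in scope in the same package), and the listen address is an arbitrary choice:

```go
package main

import (
    "log"
    "net/http"

    "go.temporal.io/sdk/converter"
)

func main() {
    // Expose the codec's encode/decode operations over HTTP so the
    // Temporal CLI and Web UI can call them. Codec{} is assumed in scope.
    handler := converter.NewPayloadCodecHTTPHandler(&Codec{})
    log.Fatal(http.ListenAndServe("localhost:8081", handler))
}
```

Note that for the Web UI to call your Codec Server from a browser, you also need to handle CORS headers; see the Codec Server documentation linked above for the full requirements.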
For reference, see the [Codec server](https://github.com/temporalio/samples-go/tree/main/codec-server) sample.

## Use custom Payload conversion {#custom-payload-conversion}

**How to customize the conversion of a payload using the Go SDK.**

Temporal SDKs provide a default [Payload Converter](/payload-converter) that can be customized to convert a custom data type to [Payload](/dataconversion#payload) and back.

The order in which your encoding Payload Converters are applied depends on the order given to the Data Converter. You can set multiple encoding Payload Converters to run your conversions. When the Data Converter receives a value for conversion, it passes through each Payload Converter in sequence until the converter that handles the data type does the conversion.

Payload Converters can be customized independently of a Payload Codec.

## How to use a custom Payload Converter in Go {#custom-payload-converter}

**How to use a custom Payload Converter using the Go SDK.**

Use a [Composite Data Converter](https://pkg.go.dev/go.temporal.io/sdk/converter#CompositeDataConverter) to apply custom, type-specific Payload Converters in a specified order. Defining a new Composite Data Converter is not always necessary to implement custom data handling. You can override the default Converter with a custom Codec, but a Composite Data Converter may be necessary for complex Workflow logic.

`NewCompositeDataConverter` creates a new instance of `CompositeDataConverter` from an ordered list of type-specific Payload Converters. The following type-specific Payload Converters are available in the Go SDK, listed in the order that they are applied by the default Data Converter:

- [NewNilPayloadConverter()](https://pkg.go.dev/go.temporal.io/sdk/converter#NilPayloadConverter)
- [NewByteSlicePayloadConverter()](https://pkg.go.dev/go.temporal.io/sdk/converter#ByteSlicePayloadConverter)
- [NewProtoJSONPayloadConverter()](https://pkg.go.dev/go.temporal.io/sdk/converter#ProtoJSONPayloadConverter)
- [NewProtoPayloadConverter()](https://pkg.go.dev/go.temporal.io/sdk/converter#ProtoPayloadConverter)
- [NewJSONPayloadConverter()](https://pkg.go.dev/go.temporal.io/sdk/converter#JSONPayloadConverter)

The order in which the Payload Converters are applied is important because during serialization the Data Converter tries the Payload Converters in that specific order until a Payload Converter returns a non-nil Payload.

To set your custom Payload Converter, use [`NewCompositeDataConverter`](https://pkg.go.dev/go.temporal.io/sdk/converter#NewCompositeDataConverter) and set it as the Data Converter in the Client options.

- To replace the default Data Converter with a custom `NewCompositeDataConverter`, use the following.

```go
dataConverter := converter.NewCompositeDataConverter(YourCustomPayloadConverter())
```

- To add your custom type conversion to the default Data Converter, use the following to keep the defaults but set yours just before the default JSON converter, which acts as the final fallthrough.
```go dataConverter := converter.NewCompositeDataConverter( converter.NewNilPayloadConverter(), converter.NewByteSlicePayloadConverter(), converter.NewProtoJSONPayloadConverter(), converter.NewProtoPayloadConverter(), YourCustomPayloadConverter(), converter.NewJSONPayloadConverter(), ) ``` --- ## Core application - Go SDK The Foundations section of the Temporal Developer's guide covers the minimum set of concepts and implementation details needed to build and run a [Temporal Application](/temporal#temporal-application)—that is, all the relevant steps to start a [Workflow Execution](#develop-workflows) that executes an [Activity](#activity-definition). In this section you can find the following: - [Run a development Temporal Service](#run-a-development-server) - [Develop a Workflow](#develop-workflows) - [Develop an Activity](#activity-definition) - [Start an Activity Execution](#activity-execution) - [Run a dev Worker](#develop-worker) - [Run a Temporal Cloud Worker](#run-a-temporal-cloud-worker) - [Set a Dynamic Workflow](#set-a-dynamic-workflow) - [Set a Dynamic Activity](#set-a-dynamic-activity) ## How to install the Temporal CLI and run a development server {#run-a-development-server} This section describes how to install the [Temporal CLI](/cli) and run a development Temporal Service. The local development Temporal Service comes packaged with the [Temporal Web UI](/web-ui). For information on deploying and running a self-hosted production Temporal Service, see the [Self-hosted guide](/self-hosted-guide), or sign up for [Temporal Cloud](/cloud) and let us run your production Temporal Service for you. The Temporal CLI is a tool for interacting with a Temporal Service from the command line and it includes a distribution of the Temporal Server and Web UI. This local development Temporal Service runs as a single process with zero runtime dependencies and it supports persistence to disk and in-memory mode through SQLite. **Install the Temporal CLI** The Temporal CLI is available on macOS, Windows, and Linux. ### macOS **How to install the Temporal CLI on macOS** Choose one of the following install methods to install the Temporal CLI on macOS: **Install the Temporal CLI with Homebrew** ```bash brew install temporal ``` **Install the Temporal CLI from CDN** 1. Select the platform and architecture needed. - Download for Darwin amd64: https://temporal.download/cli/archive/latest?platform=darwin&arch=amd64 - Download for Darwin arm64: https://temporal.download/cli/archive/latest?platform=darwin&arch=arm64 2. Extract the downloaded archive. 3. Add the `temporal` binary to your PATH. ### Linux **How to install the Temporal CLI on Linux** Choose one of the following install methods to install the Temporal CLI on Linux: **Install the Temporal CLI with Homebrew** ```bash brew install temporal ``` **Install the Temporal CLI from CDN** 1. Select the platform and architecture needed. - Download for Linux amd64: https://temporal.download/cli/archive/latest?platform=linux&arch=amd64 - Download for Linux arm64: https://temporal.download/cli/archive/latest?platform=linux&arch=arm64 2. Extract the downloaded archive. 3. Add the `temporal` binary to your PATH. ### Windows **How to install the Temporal CLI on Windows** Follow these instructions to install the Temporal CLI on Windows: **Install the Temporal CLI from CDN** 1. Select the platform and architecture needed and download the binary. 
- Download for Windows amd64: https://temporal.download/cli/archive/latest?platform=windows&arch=amd64 - Download for Windows arm64: https://temporal.download/cli/archive/latest?platform=windows&arch=arm64 2. Extract the downloaded archive. 3. Add the `temporal.exe` binary to your PATH. ### Start the Temporal Development Server Start the Temporal Development Server by using the `server start-dev` command. ```bash temporal server start-dev ``` This command automatically starts the Web UI, creates the default [Namespace](/namespaces), and uses an in-memory database. The Temporal Server should be available on `localhost:7233` and the Temporal Web UI should be accessible at [`http://localhost:8233`](http://localhost:8233/). The server's startup configuration can be customized using command line options. For a full list of options, run: ```bash temporal server start-dev --help ``` ## How to install a Temporal SDK {#install-a-temporal-sdk} A [Temporal SDK](/encyclopedia/temporal-sdks) provides a framework for [Temporal Application](/temporal#temporal-application) development. An SDK provides you with the following: - A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) to communicate with a [Temporal Service](/temporal-service). - APIs to develop [Workflows](/workflows). - APIs to create and manage [Worker Processes](/workers#worker). - APIs to author [Activities](/activity-definition). Add the [Temporal Go SDK](https://github.com/temporalio/sdk-go) to your project: ```bash go get go.temporal.io/sdk ``` Or clone the Go SDK repo to your preferred location: ```bash git clone git@github.com:temporalio/sdk-go.git ``` ### How to find the Go SDK API reference {#api-reference} The Temporal Go SDK API reference is published on [pkg.go.dev](https://pkg.go.dev/go.temporal.io/sdk). - Short link: [`t.mp/go-api`](https://t.mp/go-api) ### Where are SDK-specific code examples? {#code-samples} You can find a complete list of executable code samples in [Temporal's GitHub repository](https://github.com/temporalio?q=samples-&type=all&language=&sort=). Additionally, several of the [Tutorials](https://learn.temporal.io) are backed by a fully executable template application. - [Go Samples repo](https://github.com/temporalio/samples-go#samples-directory) - [Background Check application](https://github.com/temporalio/background-checks): Provides a non-trivial Temporal Application implementation in conjunction with [application documentation](https://learn.temporal.io/examples/go/background-checks/). - [Hello world application template in Go](https://github.com/temporalio/hello-world-project-template-go): Provides a quick-start development app for users. This sample works in conjunction with the ["Hello World!" from scratch tutorial in Go](https://learn.temporal.io/getting_started/go/hello_world_in_go/). - [Money transfer application template in Go](https://github.com/temporalio/money-transfer-project-template-go): Provides a quick-start development app for users. It demonstrates a basic "money transfer" Workflow Definition and works in conjunction with the [Run your first app tutorial in Go](https://learn.temporal.io/getting_started/go/first_program_in_go/). - [Subscription-style Workflow Definition in Go](https://github.com/temporalio/subscription-workflow-project-template-go): Demonstrates some of the patterns that could be implemented for a subscription-style business process. 
- [eCommerce application example in Go](https://github.com/temporalio/temporal-ecommerce): Showcases a per-user shopping cart–style Workflow Definition that includes an API for adding and removing items from the cart as well as a web UI. This application sample works in conjunction with the [eCommerce in Go tutorial](https://learn.temporal.io/tutorials/go/build-an-ecommerce-app).

## How to develop a basic Workflow {#develop-workflows}

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a [Workflow Definition](/workflow-definition).

In the Temporal Go SDK programming model, a [Workflow Definition](/workflow-definition) is an exportable function. Below is an example of a basic Workflow Definition.

View the source code in the context of the rest of the application code.

```go
package yourapp

import (
    "time"

    "go.temporal.io/sdk/workflow"
)

// ...

// YourSimpleWorkflowDefinition is the most basic Workflow Definition.
func YourSimpleWorkflowDefinition(ctx workflow.Context) error {
    // ...
    return nil
}
```

### How to define Workflow parameters {#workflow-parameters}

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow. All Workflow Definition parameters must be serializable.

The first parameter of a Go-based Workflow Definition must be of the [`workflow.Context`](https://pkg.go.dev/go.temporal.io/sdk/workflow#Context) type. It is used by the Temporal Go SDK to pass around Workflow Execution context, and virtually all the Go SDK APIs that are callable from the Workflow require it. It is acquired from the [`go.temporal.io/sdk/workflow`](https://pkg.go.dev/go.temporal.io/sdk/workflow) package.

The `workflow.Context` entity operates similarly to the standard `context.Context` entity provided by Go. The only difference between `workflow.Context` and `context.Context` is that the `Done()` function, provided by `workflow.Context`, returns `workflow.Channel` instead of the standard Go `chan`.

Additional parameters can be passed to the Workflow when it is invoked. A Workflow Definition may support multiple custom parameters, or none. These parameters can be regular type variables or safe pointers. However, the best practice is to pass a single parameter that is of a `struct` type, so there can be some backward compatibility if new parameters are added. All Workflow Definition parameters must be serializable and can't be channels, functions, variadic, or unsafe pointers.

View the source code in the context of the rest of the application code.

```go
package yourapp

import (
    "time"

    "go.temporal.io/sdk/workflow"
)

// YourWorkflowParam is the object passed to the Workflow.
type YourWorkflowParam struct {
    WorkflowParamX string
    WorkflowParamY int
}

// ...

// YourWorkflowDefinition is your custom Workflow Definition.
func YourWorkflowDefinition(ctx workflow.Context, param YourWorkflowParam) (*YourWorkflowResultObject, error) {
    // ...
}
```

### How to define Workflow return parameters {#workflow-return-values}

Workflow return values must also be serializable. Returning results, returning errors, or throwing exceptions is fairly idiomatic in each language that is supported. However, Temporal APIs that must be used to get the result of a Workflow Execution will only ever receive one of either the result or the error.
A Go-based Workflow Definition can return either just an `error` or a `customValue, error` combination. Again, the best practice here is to use a `struct` type to hold all custom values.

A Workflow Definition written in Go can return both a custom value and an error. However, it's not possible to receive both a custom value and an error in the calling process, as is normal in Go. The caller will receive either one or the other. Returning a non-nil `error` from a Workflow indicates that an error was encountered during its execution and that the Workflow Execution should be terminated; any custom return values will be ignored by the system.

View the source code in the context of the rest of the application code.

```go
package yourapp

import (
    "time"

    "go.temporal.io/sdk/workflow"
)

// ...

// YourWorkflowResultObject is the object returned by the Workflow.
type YourWorkflowResultObject struct {
    WFResultFieldX string
    WFResultFieldY int
}

// ...

// YourWorkflowDefinition is your custom Workflow Definition.
func YourWorkflowDefinition(ctx workflow.Context, param YourWorkflowParam) (*YourWorkflowResultObject, error) {
    // ...
    if err != nil {
        return nil, err
    }
    // Make the results of the Workflow Execution available.
    workflowResult := &YourWorkflowResultObject{
        WFResultFieldX: activityResult.ResultFieldX,
        WFResultFieldY: activityResult.ResultFieldY,
    }
    return workflowResult, nil
}
```

### How to customize Workflow Type in Go {#customize-workflow-type}

In Go, by default, the Workflow Type name is the same as the function name. To customize the Workflow Type, set the `Name` parameter with `RegisterOptions` when registering your Workflow with a Worker.

View the source code in the context of the rest of the application code.

```go
package main

import (
    "log"

    "go.temporal.io/sdk/activity"
    "go.temporal.io/sdk/client"
    "go.temporal.io/sdk/worker"
    "go.temporal.io/sdk/workflow"

    "documentation-samples-go/yourapp"
)

// ...

func main() {
    // ...
    yourWorker := worker.New(temporalClient, "your-custom-task-queue-name", worker.Options{})
    // ...
    // Use RegisterOptions to set the name of the Workflow Type for example.
    registerWFOptions := workflow.RegisterOptions{
        Name: "JustAnotherWorkflow",
    }
    yourWorker.RegisterWorkflowWithOptions(yourapp.YourSimpleWorkflowDefinition, registerWFOptions)
    // ...
}
```

### How to develop Workflow logic {#workflow-logic-requirements}

Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.

In Go, Workflow Definition code cannot directly do the following:

- Iterate over maps using `range`, because with `range` the order of the map's iteration is randomized. Instead you can collect the keys of the map, sort them, and then iterate over the sorted keys to access the map. This technique provides deterministic results. You can also use a Side Effect or an Activity to process the map instead.
- Call an external API, conduct a file I/O operation, talk to another service, etc. (Use an Activity for these.)

The Temporal Go SDK has APIs to handle equivalent Go constructs:

- `workflow.Now()` This is a replacement for `time.Now()`.
- `workflow.Sleep()` This is a replacement for `time.Sleep()`.
- `workflow.GetLogger()` This ensures that the provided logger does not duplicate logs during a replay.
- `workflow.Go()` This is a replacement for the `go` statement.
- `workflow.Channel` This is a replacement for the native `chan` type. Temporal provides support for both buffered and unbuffered channels.
- `workflow.Selector` This is a replacement for the `select` statement. Learn more on the [Go SDK Selectors](https://legacy-documentation-sdks.temporal.io/go/selectors) page.
- `workflow.Context` This is a replacement for `context.Context`. See [Tracing](/develop/go/observability#tracing-and-context-propagation) for more information about context propagation.

## How to develop an Activity Definition in Go {#activity-definition}

In the Temporal Go SDK programming model, an Activity Definition is an exportable function or a `struct` method. Below is an example of both a basic Activity Definition and of an Activity defined as a struct method.

An _Activity struct_ can have more than one method, with each method acting as a separate Activity Type. Activities written as struct methods can use shared struct variables, such as:

- an application-level DB pool
- client connection to another service
- reusable utilities
- any other expensive resources that you only want to initialize once per process

Because this is such a common need, the rest of this guide shows Activities written as `struct` methods.

:::note

While it is possible to register struct methods as Workflows, this is strongly discouraged. In some cases, struct methods as Workflows may cause non-deterministic errors. We recommend only using struct methods for Activities.

:::

View the source code in the context of the rest of the application code.

```go
package yourapp

import (
    "context"

    "go.temporal.io/sdk/activity"
)

// ...

// YourSimpleActivityDefinition is a basic Activity Definition.
func YourSimpleActivityDefinition(ctx context.Context) error {
    return nil
}

// YourActivityObject is the struct that maintains shared state across Activities.
// If the Worker crashes, this Activity object loses its state.
type YourActivityObject struct {
    Message *string
    Number  *int
}

// YourActivityDefinition is your custom Activity Definition.
// An Activity Definition is an exportable function or struct method.
func (a *YourActivityObject) YourActivityDefinition(ctx context.Context, param YourActivityParam) (*YourActivityResultObject, error) {
    // ...
}
```

### How to develop Activity Parameters {#activity-parameters}

There is no explicit limit to the total number of parameters that an [Activity Definition](/activity-definition) may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload.

A single argument is limited to a maximum size of 2 MB. And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Also, keep in mind that all Payload data is recorded in the [Workflow Execution Event History](/workflow-execution/event#event-history) and large Event Histories can affect Worker performance. This is because the entire Event History could be transferred to a Worker Process with a [Workflow Task](/tasks#workflow-task).

Some SDKs require that you pass context objects, others do not. When it comes to your application data—that is, data that is serialized and encoded into a Payload—we recommend that you use a single object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a function or method signature.
The first parameter of an Activity Definition is `context.Context`. This parameter is optional for an Activity Definition, though it is recommended, especially if the Activity is expected to use other Go SDK APIs.

An Activity Definition can support as many other custom parameters as needed. However, all parameters must be serializable (parameters can't be channels, functions, variadic, or unsafe pointers), and it is recommended to pass a single struct that can be updated later.

View the source code in the context of the rest of the application code.

```go
// YourActivityParam is the struct passed to your Activity.
// Use a struct so that your function signature remains compatible if fields change.
type YourActivityParam struct {
    ActivityParamX string
    ActivityParamY int
}

// ...

func (a *YourActivityObject) YourActivityDefinition(ctx context.Context, param YourActivityParam) (*YourActivityResultObject, error) {
    // ...
}
```

### How to define Activity return values {#activity-return-values}

All data returned from an Activity must be serializable.

Activity return values are subject to payload size limits in Temporal. The default payload size limit is 2 MB, and there is a hard limit of 4 MB for any gRPC message size in the Event History transaction ([see Cloud limits here](https://docs.temporal.io/cloud/limits#per-message-grpc-limit)). Keep in mind that all return values are recorded in a [Workflow Execution Event History](/workflow-execution/event#event-history).

A Go-based Activity Definition can return either just an `error` or a `customValue, error` combination (same as a Workflow Definition). You may wish to use a `struct` type to hold all custom values; just keep in mind they must all be serializable.

View the source code in the context of the rest of the application code.

```go
// YourActivityResultObject is the struct returned from your Activity.
// Use a struct so that you can return multiple values of different types.
// Additionally, your function signature remains compatible if the fields change.
type YourActivityResultObject struct {
    ResultFieldX string
    ResultFieldY int
}

// ...

func (a *YourActivityObject) YourActivityDefinition(ctx context.Context, param YourActivityParam) (*YourActivityResultObject, error) {
    // ...
    result := &YourActivityResultObject{
        ResultFieldX: "Success",
        ResultFieldY: 1,
    }
    // Return the results back to the Workflow Execution.
    // The results persist within the Event History of the Workflow Execution.
    return result, nil
}
```

### How to customize Activity Type in Go {#customize-activity-type}

To customize the Activity Type, set the `Name` parameter with `RegisterOptions` when registering your Activity with a Worker.

View the source code in the context of the rest of the application code.

```go
func main() {
    // ...
    yourWorker := worker.New(temporalClient, "your-custom-task-queue-name", worker.Options{})
    // ...
    // Use RegisterOptions to change the name of the Activity Type for example.
    registerAOptions := activity.RegisterOptions{
        Name: "JustAnotherActivity",
    }
    yourWorker.RegisterActivityWithOptions(yourapp.YourSimpleActivityDefinition, registerAOptions)
    // Run the Worker
    err = yourWorker.Run(worker.InterruptCh())
    // ...
}
// ...
```

## How to start an Activity Execution {#activity-execution}

Calls to spawn [Activity Executions](/activity-execution) are written within a [Workflow Definition](/workflow-definition). The call to spawn an Activity Execution generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command.
This results in the set of three [Activity Task](/tasks#activity-task) related Events ([ActivityTaskScheduled](/references/events#activitytaskscheduled), [ActivityTaskStarted](/references/events#activitytaskstarted), and a closing Event such as [ActivityTaskCompleted](/references/events#activitytaskcompleted)) in your Workflow Execution Event History.

A single instance of the Activities implementation is shared across multiple simultaneous Activity invocations. Activity implementation code should be _idempotent_.

The values passed to Activities through invocation parameters or returned through a result value are recorded in the Execution history. The entire Execution history is transferred from the Temporal Service to Workflow Workers when Workflow state needs to be recovered. A large Execution history can thus adversely impact the performance of your Workflow.

Therefore, be mindful of the amount of data you transfer through Activity invocation parameters or return values. Otherwise, no additional limitations exist on Activity implementations.

To spawn an [Activity Execution](/activity-execution), call [`ExecuteActivity()`](https://pkg.go.dev/go.temporal.io/sdk/workflow#ExecuteActivity) inside your Workflow Definition. The API is available from the [`go.temporal.io/sdk/workflow`](https://pkg.go.dev/go.temporal.io/sdk/workflow) package.

The `ExecuteActivity()` API call requires an instance of `workflow.Context`, the Activity function name, and any variables to be passed to the Activity Execution.

The Activity function name can be provided as a variable object (no quotations) or as a string. The benefit of passing the actual function object is that the framework can validate the parameters against the Activity Definition.

The `ExecuteActivity` call returns a Future, which can be used to get the result of the Activity Execution.

View the source code in the context of the rest of the application code.

```go
func YourWorkflowDefinition(ctx workflow.Context, param YourWorkflowParam) (*YourWorkflowResultObject, error) {
    // Set the options for the Activity Execution.
    // Either StartToCloseTimeout or ScheduleToCloseTimeout is required.
    // Not specifying a Task Queue will default to the parent Workflow Task Queue.
    activityOptions := workflow.ActivityOptions{
        StartToCloseTimeout: 10 * time.Second,
    }
    ctx = workflow.WithActivityOptions(ctx, activityOptions)
    activityParam := YourActivityParam{
        ActivityParamX: param.WorkflowParamX,
        ActivityParamY: param.WorkflowParamY,
    }
    // Use a nil struct pointer to call Activities that are part of a struct.
    var a *YourActivityObject
    // Execute the Activity and wait for the result.
    var activityResult YourActivityResultObject
    err := workflow.ExecuteActivity(ctx, a.YourActivityDefinition, activityParam).Get(ctx, &activityResult)
    if err != nil {
        return nil, err
    }
    // ...
}
```

### How to set the required Activity Timeouts {#required-timeout}

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set in the Activity Options.

To set an Activity Timeout in Go, create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the Activity Timeout field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.
Available timeouts are:

- `StartToCloseTimeout`
- `ScheduleToCloseTimeout`
- `ScheduleToStartTimeout`

```go
activityOptions := workflow.ActivityOptions{
    // Set Activity Timeout duration
    ScheduleToCloseTimeout: 10 * time.Second,
    // StartToCloseTimeout: 10 * time.Second,
    // ScheduleToStartTimeout: 10 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

### Go ActivityOptions reference {#activity-options-reference}

Create an instance of [`ActivityOptions`](https://pkg.go.dev/go.temporal.io/sdk/workflow#ActivityOptions) from the `go.temporal.io/sdk/workflow` package and use [`WithActivityOptions()`](https://pkg.go.dev/go.temporal.io/sdk/workflow#WithActivityOptions) to apply it to the instance of `workflow.Context`. The instance of `workflow.Context` is then passed to the `ExecuteActivity()` call.

| Field | Required | Type |
| --- | --- | --- |
| [`ActivityID`](#activityid) | No | `string` |
| [`TaskQueueName`](#taskqueuename) | No | `string` |
| [`ScheduleToCloseTimeout`](#scheduletoclosetimeout) | Yes (or `StartToCloseTimeout`) | `time.Duration` |
| [`ScheduleToStartTimeout`](#scheduletostarttimeout) | No | `time.Duration` |
| [`StartToCloseTimeout`](#starttoclosetimeout) | Yes (or `ScheduleToCloseTimeout`) | `time.Duration` |
| [`HeartbeatTimeout`](#heartbeattimeout) | No | `time.Duration` |
| [`WaitForCancellation`](#waitforcancellation) | No | `bool` |
| [`OriginalTaskQueueName`](#originaltaskqueuename) | No | `string` |
| [`RetryPolicy`](#retrypolicy) | No | [`RetryPolicy`](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy) |

#### ActivityID

- Type: `string`
- Default: None

```go
activityOptions := workflow.ActivityOptions{
    ActivityID: "your-activity-id",
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

- [What is an Activity Id](/activity-execution#activity-id)

#### TaskQueueName

- Type: `string`
- Default: Inherits the TaskQueue name from the Workflow.

```go
activityOptions := workflow.ActivityOptions{
    TaskQueueName: "your-task-queue-name",
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

- [What is a Task Queue](/task-queue)

#### ScheduleToCloseTimeout

To set a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout), create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the `ScheduleToCloseTimeout` field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.

This or `StartToCloseTimeout` must be set.
- Type: `time.Duration`
- Default: ∞ (infinity - no limit)

```go
activityOptions := workflow.ActivityOptions{
    ScheduleToCloseTimeout: 10 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

#### ScheduleToStartTimeout

To set a [Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout), create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the `ScheduleToStartTimeout` field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.

- Type: `time.Duration`
- Default: ∞ (infinity - no limit)

```go
activityOptions := workflow.ActivityOptions{
    ScheduleToStartTimeout: 10 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

#### StartToCloseTimeout

To set a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout), create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the `StartToCloseTimeout` field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.

This or `ScheduleToCloseTimeout` must be set.

- Type: `time.Duration`
- Default: Same as the `ScheduleToCloseTimeout`

```go
activityOptions := workflow.ActivityOptions{
    StartToCloseTimeout: 10 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

#### HeartbeatTimeout

To set a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout), create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the `HeartbeatTimeout` field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.

```go
activityOptions := workflow.ActivityOptions{
    HeartbeatTimeout: 10 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

#### WaitForCancellation

If `true`, after a Cancellation request the Workflow waits for the Activity Execution to finish (complete, fail, or accept the Cancellation) instead of proceeding immediately.

- Type: `bool`
- Default: `false`

```go
activityOptions := workflow.ActivityOptions{
    WaitForCancellation: false,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

#### OriginalTaskQueueName

```go
activityOptions := workflow.ActivityOptions{
    OriginalTaskQueueName: "your-original-task-queue-name",
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

#### RetryPolicy

To set a [RetryPolicy](/encyclopedia/retry-policies), create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the `RetryPolicy` field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.

- Type: [`RetryPolicy`](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy)
- Default:

```go
retryPolicy := &temporal.RetryPolicy{
    InitialInterval:        time.Second,
    BackoffCoefficient:     2.0,
    MaximumInterval:        time.Second * 100, // 100 * InitialInterval
    MaximumAttempts:        0,                 // Unlimited
    NonRetryableErrorTypes: []string{},        // empty
}
```

Providing a Retry Policy here is a customization that overrides the individual field defaults.

```go
retryPolicy := &temporal.RetryPolicy{
    InitialInterval:    time.Second,
    BackoffCoefficient: 2.0,
    MaximumInterval:    time.Second * 100,
}
activityOptions := workflow.ActivityOptions{
    RetryPolicy: retryPolicy,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

### How to get the results of an Activity Execution {#get-activity-results}

The call to spawn an [Activity Execution](/activity-execution) generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command and provides the Workflow with an Awaitable. Workflow Executions can either block progress until the result is available through the Awaitable or continue progressing, making use of the result when it becomes available.

The `ExecuteActivity` API call returns an instance of [`workflow.Future`](https://pkg.go.dev/go.temporal.io/sdk/workflow#Future), which has the following two methods:

- `Get()`: Takes the instance of `workflow.Context` that was passed to the Activity Execution, and a pointer, as parameters. The variable associated with the pointer is populated with the Activity Execution result. This call blocks until the results are available.
- `IsReady()`: Returns `true` when the result of the Activity Execution is ready.

Call the `Get()` method on the instance of `workflow.Future` to get the result of the Activity Execution. The type of the result parameter must match the type of the return value declared by the Activity function.

```go
func YourWorkflowDefinition(ctx workflow.Context, param YourWorkflowParam) (YourWorkflowResponse, error) {
    // ...
    future := workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam)
    var yourActivityResult YourActivityResult
    if err := future.Get(ctx, &yourActivityResult); err != nil {
        // ...
    }
    // ...
}
```

Use the `IsReady()` method first to make sure the `Get()` call doesn't cause the Workflow Execution to wait on the result.

```go
func YourWorkflowDefinition(ctx workflow.Context, param YourWorkflowParam) (YourWorkflowResponse, error) {
    // ...
    future := workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam)
    // ...
    if future.IsReady() {
        var yourActivityResult YourActivityResult
        if err := future.Get(ctx, &yourActivityResult); err != nil {
            // ...
        }
    }
    // ...
}
```

It is idiomatic to invoke multiple Activity Executions from within a Workflow. Therefore, it is also idiomatic to either block on the results of the Activity Executions or continue on to execute additional logic, checking for the Activity Execution results at a later time.
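As a minimal sketch of that fan-out pattern, assuming the `YourActivity*` names defined earlier on this page (the function name here is illustrative):

```go
// YourFanOutWorkflowDefinition starts several Activity Executions without
// blocking, then gathers all of their results.
func YourFanOutWorkflowDefinition(ctx workflow.Context, params []YourActivityParam) error {
    ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
        StartToCloseTimeout: 10 * time.Second,
    })
    // Each ExecuteActivity call schedules an Activity Task immediately.
    var a *YourActivityObject
    futures := make([]workflow.Future, 0, len(params))
    for _, param := range params {
        futures = append(futures, workflow.ExecuteActivity(ctx, a.YourActivityDefinition, param))
    }
    // Other Workflow logic could run here before blocking on the results.
    for _, future := range futures {
        var result YourActivityResultObject
        if err := future.Get(ctx, &result); err != nil {
            return err
        }
        // ...
    }
    return nil
}
```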
## How to develop a Worker in Go {#develop-worker}

Create an instance of [`Worker`](https://pkg.go.dev/go.temporal.io/sdk/worker#Worker) by calling [`worker.New()`](https://pkg.go.dev/go.temporal.io/sdk/worker#New), available through the `go.temporal.io/sdk/worker` package, and pass it the following parameters:

1. An instance of the Temporal Go SDK `Client`.
2. The name of the Task Queue that it will poll.
3. An instance of `worker.Options`, which can be empty.

Then, register the Workflow Types and the Activity Types that the Worker will be capable of executing.

Lastly, call either the `Start()` or the `Run()` method on the instance of the Worker. `Run()` accepts an interrupt channel as a parameter, so that the Worker can be stopped from the terminal. Otherwise, the `Stop()` method must be called to stop the Worker.

:::tip

If you have [`gow`](https://github.com/mitranim/gow) installed, the Worker Process automatically "reloads" when you update the Worker file:

```bash
go install github.com/mitranim/gow@latest
gow run worker/main.go # automatically reloads when file changes
```

:::

View the source code in the context of the rest of the application code.

```go
package main

import (
    "log"

    "go.temporal.io/sdk/activity"
    "go.temporal.io/sdk/client"
    "go.temporal.io/sdk/worker"
    "go.temporal.io/sdk/workflow"

    "documentation-samples-go/yourapp"
)

func main() {
    // Create a Temporal Client
    // A Temporal Client is a heavyweight object that should be created just once per process.
    temporalClient, err := client.Dial(client.Options{})
    if err != nil {
        log.Fatalln("Unable to create client", err)
    }
    defer temporalClient.Close()
    // Create a new Worker.
    yourWorker := worker.New(temporalClient, "your-custom-task-queue-name", worker.Options{})
    // Register your Workflow Definitions with the Worker.
    // Use the RegisterWorkflow or RegisterWorkflowWithOptions method for each Workflow registration.
    yourWorker.RegisterWorkflow(yourapp.YourWorkflowDefinition)
    // ...
    // Register your Activity Definitions with the Worker.
    // Use this technique for registering all Activities that are part of a struct and set the shared variable values.
    message := "This could be a connection string or endpoint details"
    number := 100
    activities := &yourapp.YourActivityObject{
        Message: &message,
        Number:  &number,
    }
    // Use the RegisterActivity or RegisterActivityWithOptions method for each Activity.
    yourWorker.RegisterActivity(activities)
    // ...
    // Run the Worker
    err = yourWorker.Run(worker.InterruptCh())
    if err != nil {
        log.Fatalln("Unable to start Worker", err)
    }
}
// ...
```

### How to set WorkerOptions in Go {#workeroptions}

Create an instance of [`Options`](https://pkg.go.dev/go.temporal.io/sdk/worker#Options) from the `go.temporal.io/sdk/worker` package, set any of the optional fields, and pass the instance to the [`New`](https://pkg.go.dev/go.temporal.io/sdk/worker#New) call.
| Field | Required | Type |
| ------------------------------------------------------------------------------------- | -------- | --------------------------------------------------------------------------------------------- |
| [`MaxConcurrentActivityExecutionSize`](#maxconcurrentactivityexecutionsize) | No | `int` |
| [`WorkerActivitiesPerSecond`](#workeractivitiespersecond) | No | `float64` |
| [`MaxConcurrentLocalActivityExecutionSize`](#maxconcurrentlocalactivityexecutionsize) | No | `int` |
| [`WorkerLocalActivitiesPerSecond`](#workerlocalactivitiespersecond) | No | `float64` |
| [`TaskQueueActivitiesPerSecond`](#taskqueueactivitiespersecond) | No | `float64` |
| [`MaxConcurrentActivityTaskPollers`](#maxconcurrentactivitytaskpollers) | No | `int` |
| [`MaxConcurrentWorkflowTaskExecutionSize`](#maxconcurrentworkflowtaskexecutionsize) | No | `int` |
| [`MaxConcurrentWorkflowTaskPollers`](#maxconcurrentworkflowtaskpollers) | No | `int` |
| [`EnableLoggingInReplay`](#enablelogginginreplay) | No | `bool` |
| [`DisableStickyExecution`](#disablestickyexecution) | No | `bool` |
| [`StickyScheduleToStartTimeout`](#stickyscheduletostarttimeout) | No | [`time.Duration`](https://pkg.go.dev/time#Duration) |
| [`BackgroundActivityContext`](#backgroundactivitycontext) | No | [`context.Context`](https://pkg.go.dev/context#Context) |
| [`WorkflowPanicPolicy`](#workflowpanicpolicy) | No | [`WorkflowPanicPolicy`](https://pkg.go.dev/go.temporal.io/sdk/internal#WorkflowPanicPolicy) |
| [`WorkerStopTimeout`](#workerstoptimeout) | No | [`time.Duration`](https://pkg.go.dev/time#Duration) |
| [`EnableSessionWorker`](#enablesessionworker) | No | `bool` |
| [`MaxConcurrentSessionExecutionSize`](#maxconcurrentsessionexecutionsize) | No | `int` |
| [`WorkflowInterceptorChainFactories`](#workflowinterceptorchainfactories) | No | [`[]WorkflowInterceptor`](https://pkg.go.dev/go.temporal.io/sdk/internal#WorkflowInterceptor) |
| [`LocalActivityWorkerOnly`](#localactivityworkeronly) | No | `bool` |
| [`Identity`](#identity) | No | `string` |
| [`DeadlockDetectionTimeout`](#deadlockdetectiontimeout) | No | [`time.Duration`](https://pkg.go.dev/time#Duration) |

#### MaxConcurrentActivityExecutionSize

Sets the maximum number of concurrent Activity Executions for the Worker.

- Type: `int`
- Default: `1000`

A value of `0` sets this to the default.

```go
// ...
workerOptions := worker.Options{
    MaxConcurrentActivityExecutionSize: 1000,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### WorkerActivitiesPerSecond

Rate limits the number of Activity Task Executions started per second for the Worker.

- Type: `float64`
- Default: `100000`

A value of `0` sets this to the default. The intended use case is to limit the resources used by the Worker. Notice that the value is a float, so it can be less than 1 if needed. For example, a value of `0.1` means an Activity Task Execution starts once every ten seconds. This can be used to protect downstream services from being flooded with requests.

```go
// ...
workerOptions := worker.Options{
    WorkerActivitiesPerSecond: 100000,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### MaxConcurrentLocalActivityExecutionSize

Sets the maximum number of concurrent [Local Activity Executions](/local-activity) for the Worker.

- Type: `int`
- Default: `1000`

A value of `0` sets this to the default.

```go
// ...
workerOptions := worker.Options{
    MaxConcurrentLocalActivityExecutionSize: 1000,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### WorkerLocalActivitiesPerSecond

Rate limits the number of Local Activity Executions started per second for the Worker.

- Type: `float64`
- Default: `100000`

A value of `0` sets this to the default. The intended use case is to limit the resources used by the Worker. Notice that the value is a float, so it can be less than 1 if needed. For example, a value of `0.1` means a Local Activity Task Execution starts once every ten seconds. This can be used to protect downstream services from being flooded with requests.

```go
// ...
workerOptions := worker.Options{
    WorkerLocalActivitiesPerSecond: 100000,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### TaskQueueActivitiesPerSecond

Rate limits the number of Activity Executions that can be started per second.

- Type: `float64`
- Default: `100000`

A value of `0` sets this to the default. This rate is managed by the Temporal Service and limits the Activity Tasks per second for the entire Task Queue. This is in contrast to [`WorkerActivitiesPerSecond`](#workeractivitiespersecond), which controls the rate for an individual Worker only. Notice that the value is a float, so it can be less than 1 if needed. For example, a value of `0.1` means the Activity executes once every ten seconds. This can be used to protect downstream services from being flooded with requests.

```go
// ...
workerOptions := worker.Options{
    TaskQueueActivitiesPerSecond: 100000,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### MaxConcurrentActivityTaskPollers

Sets the maximum number of goroutines that concurrently poll the Task Queue for Activity Tasks.

- Type: `int`
- Default: `2`

Changing this value affects the rate at which the Worker is able to consume Activity Tasks from the Task Queue.

```go
// ...
workerOptions := worker.Options{
    MaxConcurrentActivityTaskPollers: 2,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### MaxConcurrentWorkflowTaskExecutionSize

Sets the maximum number of concurrent Workflow Task Executions the Worker can have.

- Type: `int`
- Default: `1000`

A value of `0` sets this to the default.

```go
// ...
workerOptions := worker.Options{
    MaxConcurrentWorkflowTaskExecutionSize: 1000,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### MaxConcurrentWorkflowTaskPollers

Sets the maximum number of goroutines that concurrently poll the Task Queue for Workflow Tasks.

- Type: `int`
- Default: `2`

Changing this value affects the rate at which the Worker is able to consume Workflow Tasks from the Task Queue.

```go
// ...
workerOptions := worker.Options{
    MaxConcurrentWorkflowTaskPollers: 2,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### EnableLoggingInReplay

Set to `true` to enable logging during Workflow Execution replays.

- Type: `bool`
- Default: `false`

In Workflow Definitions you can use [`workflow.GetLogger(ctx)`](https://pkg.go.dev/go.temporal.io/sdk/workflow#GetLogger) to write logs. By default, the logger skips logging during replays, so that you do not see duplicate logs. This is really only useful for debugging purposes.

```go
// ...
workerOptions := worker.Options{
    EnableLoggingInReplay: false,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### DisableStickyExecution

:::caution Deprecated

Setting `DisableStickyExecution` to `true` can harm performance. This option will be removed soon.
See [`SetStickyWorkflowCacheSize`](https://pkg.go.dev/go.temporal.io/sdk/worker#SetStickyWorkflowCacheSize) instead.

:::

Set to disable Sticky Executions.

- Type: `bool`
- Default: `false`

A Sticky Execution runs the Workflow Tasks of a Workflow Execution on the same host (though possibly a different Worker, as long as it is on the same host). This is an optimization for Workflow Executions. When Sticky Execution is enabled, the Worker keeps the Workflow state in memory, and a new Workflow Task, containing only the new History Events, is dispatched to the same Worker. If this Worker crashes, the sticky Workflow Task times out after `StickyScheduleToStartTimeout`, and the Temporal Service then clears the stickiness for that Workflow Execution and automatically reschedules a new Workflow Task that any Worker can pick up to resume progress.

```go
// ...
workerOptions := worker.Options{
    DisableStickyExecution: true,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### StickyScheduleToStartTimeout

Sets the Sticky Execution Schedule-To-Start Timeout for Workflow Tasks.

- Type: [`time.Duration`](https://pkg.go.dev/time#Duration)
- Default: `5` seconds

The resolution is in seconds.

```go
// ...
workerOptions := worker.Options{
    StickyScheduleToStartTimeout: 5 * time.Second,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### BackgroundActivityContext

:::caution Not recommended

This method of passing dependencies between Activity Task Executions is no longer recommended. Instead, we recommend using a struct with fields that contain the dependencies, developing Activity Definitions as methods on that struct, and passing all the dependencies when initializing the struct.

- [How to develop an Activity Definition using the Go SDK](#activity-definition)

:::

- Type: [`context.Context`](https://pkg.go.dev/context#Context)

Sets the background `context.Context` for all Activity Types registered with the Worker. The context can be used to pass external dependencies, such as database connections, to Activity Task Executions.

```go
// ...
ctx := context.WithValue(context.Background(), "your-key", "your-value")
workerOptions := worker.Options{
    BackgroundActivityContext: ctx,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### WorkflowPanicPolicy

Sets how the Workflow Worker handles a non-deterministic Workflow Execution History Event and other panics from Workflow Definition code.

- Type: [`WorkflowPanicPolicy`](https://pkg.go.dev/go.temporal.io/sdk/internal#WorkflowPanicPolicy)
- Default: `BlockWorkflow`

```go
// ...
workerOptions := worker.Options{
    WorkflowPanicPolicy: worker.BlockWorkflow,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### WorkerStopTimeout

Sets the Worker's graceful stop timeout.

- Type: [`time.Duration`](https://pkg.go.dev/time#Duration)
- Default: `0`

The value's resolution is in seconds.

```go
// ...
workerOptions := worker.Options{
    WorkerStopTimeout: 0,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### EnableSessionWorker

Enables Sessions for Activity Workers.

- Type: `bool`
- Default: `false`

When `true`, the Activity Worker creates a Session to sequentially process Activity Tasks for the given Task Queue.

```go
// ...
workerOptions := worker.Options{
    EnableSessionWorker: true,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### MaxConcurrentSessionExecutionSize

Sets the maximum number of concurrent Sessions that the Worker can support.

- Type: `int`
- Default: `1000`

```go
// ...
workerOptions := worker.Options{
    MaxConcurrentSessionExecutionSize: 1000,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### WorkflowInterceptorChainFactories

Specifies the factories used to instantiate the Workflow interceptor chain.

- Type: [`[]WorkflowInterceptor`](https://pkg.go.dev/go.temporal.io/sdk/internal#WorkflowInterceptor)

The chain is instantiated for each replay of a Workflow Execution.

#### LocalActivityWorkerOnly

Sets the Worker to only handle Workflow Tasks and Local Activity Tasks.

- Type: `bool`
- Default: `false`

```go
// ...
workerOptions := worker.Options{
    LocalActivityWorkerOnly: true,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### Identity

Sets the Temporal Client-level Identity value, overwriting the existing one.

- Type: `string`
- Default: the Client identity

```go
// ...
workerOptions := worker.Options{
    Identity: "your_custom_identity",
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

#### DeadlockDetectionTimeout

Sets the maximum time that a Workflow Task can execute for.

- Type: [`time.Duration`](https://pkg.go.dev/time#Duration)
- Default: `1` second

The resolution is in seconds.

```go
// ...
workerOptions := worker.Options{
    DeadlockDetectionTimeout: time.Second,
    // ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

## How to run a Temporal Cloud Worker {#run-a-temporal-cloud-worker}

To run a Worker that uses [Temporal Cloud](/cloud), you need to provide additional connection and client options that include the following:

- An address that includes your [Cloud Namespace Name](/namespaces) and a port number: `<NamespaceName>.<AccountId>.tmprl.cloud:<port>`.
- An mTLS CA certificate.
- An mTLS private key.

More specifically, you need the following:

- A compatible mTLS CA certificate and mTLS private key that has been added to your Namespace. See [certificate requirements](/cloud/certificates#certificate-requirements).
- Your [Temporal Cloud Namespace Id](/cloud/namespaces#temporal-cloud-namespace-id), which includes your [Temporal Cloud Namespace Name](/cloud/namespaces#temporal-cloud-namespace-name) and the unique five- or six-digit [Temporal Cloud Account Id](/cloud/namespaces#temporal-cloud-account-id) that is appended to it. This information can be found in the URL of your Namespace; for example, `https://cloud.temporal.io/namespaces/yournamespace.a2fx6/`. Remember that the Namespace Id must include the Account Id: `yournamespace.a2fx6`.

For more information about managing and generating client certificates for Temporal Cloud, see [How to manage certificates in Temporal Cloud](/cloud/certificates).

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see [Temporal Customization Samples](https://github.com/temporalio/samples-server).

View the source code in the context of the rest of the application code.
```go
package main

import (
    "crypto/tls"
    "log"

    "go.temporal.io/sdk/client"
    "go.temporal.io/sdk/worker"

    "documentation-samples-go/cloud"
)

func main() {
    // Get the key and cert from your env or local machine
    clientKeyPath := "./secrets/yourkey.key"
    clientCertPath := "./secrets/yourcert.pem"
    // Specify the host and port of your Temporal Cloud Namespace
    // Host and port format: namespace.unique_id.tmprl.cloud:port
    hostPort := "<yournamespace>.<id>.tmprl.cloud:7233"
    namespace := "<yournamespace>.<id>"
    // Use the crypto/tls package to create a cert object
    cert, err := tls.LoadX509KeyPair(clientCertPath, clientKeyPath)
    if err != nil {
        log.Fatalln("Unable to load cert and key pair.", err)
    }
    // Add the cert to the tls certificates in the ConnectionOptions of the Client
    temporalClient, err := client.Dial(client.Options{
        HostPort:  hostPort,
        Namespace: namespace,
        ConnectionOptions: client.ConnectionOptions{
            TLS: &tls.Config{Certificates: []tls.Certificate{cert}},
        },
    })
    if err != nil {
        log.Fatalln("Unable to connect to Temporal Cloud.", err)
    }
    defer temporalClient.Close()
    // Create a new Worker.
    yourWorker := worker.New(temporalClient, "cloud-connection-example-go-task-queue", worker.Options{})
    // ...
}
```

### How to register types {#register-types}

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

The `RegisterWorkflow()` and `RegisterActivity()` calls essentially create an in-memory mapping between the Workflow Types and their implementations, inside the Worker process.

**Registering Activity `structs`**

Per [Activity Definition](#activity-definition) best practices, you might have an Activity struct that has multiple methods and fields. When you use `RegisterActivity()` for an Activity struct, that Worker has access to all exported methods.

**Registering multiple Types**

To register multiple Activity Types and/or Workflow Types with the Worker Entity, just make multiple registration calls, but make sure each Type name is unique:

```go
w.RegisterActivity(ActivityA)
w.RegisterActivity(ActivityB)
w.RegisterActivity(ActivityC)

w.RegisterWorkflow(WorkflowA)
w.RegisterWorkflow(WorkflowB)
w.RegisterWorkflow(WorkflowC)
```

### How to set RegisterWorkflowOptions in Go {#registerworkflowoptions}

Create an instance of [`RegisterOptions`](https://pkg.go.dev/go.temporal.io/sdk/workflow#RegisterOptions) from the `go.temporal.io/sdk/workflow` package and pass it to the [`RegisterWorkflowWithOptions`](https://pkg.go.dev/go.temporal.io/sdk/worker#WorkflowRegistry) call when registering the Workflow Type with the Worker.

Used to set options for registering a Workflow.

| Field | Required | Type |
| ----------------------------------------------------------------- | -------- | -------- |
| [`Name`](#name) | No | `string` |
| [`DisableAlreadyRegisteredCheck`](#disablealreadyregisteredcheck) | No | `bool` |

#### Name

See [How to customize a Workflow Type in Go](#customize-workflow-type).

#### DisableAlreadyRegisteredCheck

Disables the check to see if the Workflow Type has already been registered.

- Type: `bool`
- Default: `false`

```go
// ...
w := worker.New(temporalClient, "your_task_queue_name", worker.Options{})
registerOptions := workflow.RegisterOptions{
    DisableAlreadyRegisteredCheck: false,
    // ...
}
w.RegisterWorkflowWithOptions(YourWorkflowDefinition, registerOptions)
// ...
```

### How to set RegisterActivityOptions in Go {#registeractivityoptions}

Create an instance of [`RegisterOptions`](https://pkg.go.dev/go.temporal.io/sdk/activity#RegisterOptions) from the `go.temporal.io/sdk/activity` package and pass it to the [`RegisterActivityWithOptions`](https://pkg.go.dev/go.temporal.io/sdk/worker#ActivityRegistry) call when registering the Activity Type with the Worker.

Used to set options for registering an Activity.

| Field | Required | Type |
| ----------------------------------------------------------------- | -------- | -------- |
| [`Name`](#name) | No | `string` |
| [`DisableAlreadyRegisteredCheck`](#disablealreadyregisteredcheck) | No | `bool` |
| [`SkipInvalidStructFunctions`](#skipinvalidstructfunctions) | No | `bool` |

#### Name

See [How to customize Activity Type in Go](#customize-activity-type).

#### DisableAlreadyRegisteredCheck

Disables the check to see if the Activity has already been registered.

- Type: `bool`
- Default: `false`

```go
// ...
w := worker.New(temporalClient, "your_task_queue_name", worker.Options{})
registerOptions := activity.RegisterOptions{
    DisableAlreadyRegisteredCheck: false,
    // ...
}
w.RegisterActivityWithOptions(a.YourActivityDefinition, registerOptions)
// ...
```

#### SkipInvalidStructFunctions

When registering a struct that contains Activities, skip any functions that are not valid Activities. If `false`, registration panics when it encounters an invalid function.

- Type: `bool`
- Default: `false`

```go
// ...
w := worker.New(temporalClient, "your_task_queue_name", worker.Options{})
registerOptions := activity.RegisterOptions{
    SkipInvalidStructFunctions: false,
    // ...
}
w.RegisterActivityWithOptions(a.YourActivityDefinition, registerOptions)
// ...
```

## Set a Dynamic Workflow {#set-a-dynamic-workflow}

**How to set a Dynamic Workflow using the Temporal Go SDK**

A Dynamic Workflow in Temporal is a Workflow that is invoked dynamically at runtime if no other Workflow with the same name is registered. A Workflow can be registered as dynamic by using `worker.RegisterDynamicWorkflow()`. You must register the Workflow with the Worker before it can be invoked. Only one Dynamic Workflow can be present on a Worker.

The Workflow Definition must then accept a single argument of type `converter.EncodedValues`. This code snippet is taken from the [Dynamic Workflow example from samples-go](https://github.com/temporalio/samples-go/tree/main/dynamic-workflows).

```go
func DynamicWorkflow(ctx workflow.Context, args converter.EncodedValues) (string, error) {
    var result string
    info := workflow.GetInfo(ctx)

    var arg1, arg2 string
    err := args.Get(&arg1, &arg2)
    if err != nil {
        return "", fmt.Errorf("failed to decode arguments: %w", err)
    }

    if info.WorkflowType.Name == "dynamic-activity" {
        ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{StartToCloseTimeout: 10 * time.Second})
        err := workflow.ExecuteActivity(ctx, "random-activity-name", arg1, arg2).Get(ctx, &result)
        if err != nil {
            return "", err
        }
    } else {
        result = fmt.Sprintf("%s - %s - %s", info.WorkflowType.Name, arg1, arg2)
    }
    return result, nil
}
```

## Set a Dynamic Activity {#set-a-dynamic-activity}

**How to set a Dynamic Activity using the Temporal Go SDK**

A Dynamic Activity in Temporal is an Activity that is invoked dynamically at runtime if no other Activity with the same name is registered. An Activity can be registered as dynamic by using `worker.RegisterDynamicActivity()`. You must register the Activity with the Worker before it can be invoked. Only one Dynamic Activity can be present on a Worker.
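As a rough sketch, registering both dynamic handlers on a Worker might look like the following; the `DynamicRegisterOptions` values here are an assumption based on the samples repository, so verify the exact signatures against the sample linked below:

```go
w := worker.New(temporalClient, "your-task-queue", worker.Options{})
// Register the single dynamic Workflow and the single dynamic Activity handler.
// The empty options values are assumptions; see the dynamic-workflows sample for details.
w.RegisterDynamicWorkflow(DynamicWorkflow, workflow.DynamicRegisterOptions{})
w.RegisterDynamicActivity(DynamicActivity, activity.DynamicRegisterOptions{})
```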
The Activity Definition must then accept a single argument of type `converter.EncodedValues`. This code snippet is taken from the [Dynamic Workflow example from samples-go](https://github.com/temporalio/samples-go/tree/main/dynamic-workflows).

```go
func DynamicActivity(ctx context.Context, args converter.EncodedValues) (string, error) {
    var arg1, arg2 string
    err := args.Get(&arg1, &arg2)
    if err != nil {
        return "", fmt.Errorf("failed to decode arguments: %w", err)
    }

    info := activity.GetInfo(ctx)
    result := fmt.Sprintf("%s - %s - %s", info.WorkflowType.Name, arg1, arg2)
    return result, nil
}
```

---

## Debugging - Go SDK

You can use a debugger tool provided by your favorite IDE to debug your Workflow Definitions prior to testing or executing them.

The Temporal Go SDK includes deadlock detection, which fails a Workflow Task if the code blocks for more than a second without relinquishing execution control. Because of this, you may encounter a `PanicError: Potential deadlock detected` while stepping through Workflow Definitions during debugging. To alleviate this issue, set the `TEMPORAL_DEBUG` environment variable to `true` before debugging your Workflow Definition.

:::note

Make sure to set `TEMPORAL_DEBUG` to `true` only during debugging.

:::

## How to debug in a development environment {#debug-in-a-development-environment}

In addition to the normal development tools of logging and a debugger, you can also see what's happening in your Workflow by using the [Web UI](/web-ui) or [Temporal CLI](/cli).

## How to debug in a production environment {#debug-in-a-production-environment}

You can debug production Workflows using:

- [Web UI](/web-ui)
- [Temporal CLI](/cli)
- [Replay](/develop/go/testing-suite#replay)
- [Tracing](/develop/go/observability#tracing-and-context-propagation)
- [Logging](/develop/go/observability#logging)

You can debug and tune Worker performance with metrics and the [Worker performance guide](/develop/worker-performance). For more information, see [Metrics](/develop/go/observability#metrics) for setting up SDK metrics.

Debug Server performance with [Cloud metrics](/cloud/metrics/) or [self-hosted Server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics).

## How to test Workflow Definitions in Go {#testing-and-debugging}

The Temporal Go SDK provides a test framework to facilitate testing Workflow implementations. This framework is suited for implementing unit tests as well as functional tests of the Workflow logic.
The following code implements unit tests for the `SimpleWorkflow` sample:

```go
package sample

import (
    "context"
    "errors"
    "testing"

    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/suite"
    "go.temporal.io/sdk/temporal"
    "go.temporal.io/sdk/testsuite"
)

type UnitTestSuite struct {
    suite.Suite
    testsuite.WorkflowTestSuite

    env *testsuite.TestWorkflowEnvironment
}

func (s *UnitTestSuite) SetupTest() {
    s.env = s.NewTestWorkflowEnvironment()
}

func (s *UnitTestSuite) AfterTest(suiteName, testName string) {
    s.env.AssertExpectations(s.T())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParamCorrect() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        func(ctx context.Context, value string) (string, error) {
            s.Equal("test_success", value)
            return value, nil
        })
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        "", errors.New("SimpleActivityFailure"))
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_failure")

    s.True(s.env.IsWorkflowCompleted())

    err := s.env.GetWorkflowError()
    s.Error(err)
    var applicationErr *temporal.ApplicationError
    s.True(errors.As(err, &applicationErr))
    s.Equal("SimpleActivityFailure", applicationErr.Error())
}

func TestUnitTestSuite(t *testing.T) {
    suite.Run(t, new(UnitTestSuite))
}
```

#### Setup

To run unit tests, we first define a test suite struct that absorbs both the basic suite functionality from [testify](https://pkg.go.dev/github.com/stretchr/testify/suite) via `suite.Suite` and the suite functionality from the Temporal test framework via `testsuite.WorkflowTestSuite`. Because every test in this test suite will test our Workflow, we add a property to our struct to hold an instance of the test environment. This allows us to initialize the test environment in a setup method. For testing Workflows, we use a `testsuite.TestWorkflowEnvironment`.

Next, we implement a `SetupTest` method to set up a new test environment before each test. Doing so ensures that each test runs in its own isolated sandbox. We also implement an `AfterTest` function where we assert that all the mocks we set up were indeed called, by invoking `s.env.AssertExpectations(s.T())`.

The timeout for the entire test can be set using `SetTestTimeout` in the Workflow or Activity environment.

Finally, we create a regular test function recognized by the `go test` command and pass the struct to `suite.Run`.

#### A Simple Test

The simplest test case we can write is to have the test environment execute the Workflow and then evaluate the results.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}
```

Calling `s.env.ExecuteWorkflow(...)` executes the Workflow logic and any invoked Activities inside the test process. The first parameter of `s.env.ExecuteWorkflow(...)` contains the Workflow function, and any subsequent parameters contain values for custom input parameters declared by the Workflow function.
> Note that unless the Activity invocations are mocked or the Activity implementation
> is replaced (see [Activity mocking and overriding](#activity-mocking-and-overriding)), the test environment
> will execute the actual Activity code, including any calls to outside services.

After executing the Workflow in the above example, we assert that the Workflow ran through to completion via the call to `s.env.IsWorkflowCompleted()`. We also assert that no errors were returned, by asserting on the return value of `s.env.GetWorkflowError()`. If our Workflow returned a value, we could have retrieved that value via a call to `s.env.GetWorkflowResult(&value)` and added assertions on that value.

#### Activity mocking and overriding

When running unit tests on Workflows, we want to test the Workflow logic in isolation. Additionally, we want to inject Activity errors during our test runs. The test framework provides two mechanisms that support these scenarios: Activity mocking and Activity overriding. Both of these mechanisms allow you to change the behavior of Activities invoked by your Workflow without the need to modify the actual Workflow code.

Let's take a look at a test that simulates an Activity failure via the "Activity mocking" mechanism.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        "", errors.New("SimpleActivityFailure"))
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_failure")

    s.True(s.env.IsWorkflowCompleted())

    err := s.env.GetWorkflowError()
    s.Error(err)
    var applicationErr *temporal.ApplicationError
    s.True(errors.As(err, &applicationErr))
    s.Equal("SimpleActivityFailure", applicationErr.Error())
}
```

This test simulates the Activity `SimpleActivity`, which is invoked by our Workflow `SimpleWorkflow`, returning an error. We accomplish this by setting up a mock on the test environment for `SimpleActivity` that returns an error.

```go
s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
    "", errors.New("SimpleActivityFailure"))
```

With the mock set up, we can now execute the Workflow via the `s.env.ExecuteWorkflow(...)` method and assert that the Workflow completed and returned the expected error.

Simply mocking the execution to return a desired value or error is a pretty powerful mechanism to isolate Workflow logic. However, sometimes we want to replace the Activity with an alternate implementation to support a more complex test scenario. Let's assume we want to validate that the Activity gets called with the correct parameters.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParamCorrect() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        func(ctx context.Context, value string) (string, error) {
            s.Equal("test_success", value)
            return value, nil
        })
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}
```

In this example, we provide a function implementation as the parameter to `Return`. This allows us to provide an alternate implementation for the Activity `SimpleActivity`. The framework will execute this function whenever the Activity is invoked and pass on the return value from the function as the result of the Activity invocation. Additionally, the framework will validate that the signature of the "mock" function matches the signature of the original Activity function.

Since this can be an entire function, there is no limitation as to what we can do here.
In this example, we assert that the `value` param has the same content as the value we passed to the Workflow.

#### Queries

`TestWorkflowEnvironment` instances have a [`QueryWorkflow()` method](https://pkg.go.dev/go.temporal.io/sdk/internal#TestWorkflowEnvironment.QueryWorkflow) that lets you query the state of the currently running Workflow. For example, suppose you have a Workflow that lets you query the progress of a long-running task, as shown below.

```go
func ProgressWorkflow(ctx workflow.Context, percent int) error {
    logger := workflow.GetLogger(ctx)

    err := workflow.SetQueryHandler(ctx, "getProgress", func(input []byte) (int, error) {
        return percent, nil
    })
    if err != nil {
        logger.Info("SetQueryHandler failed.", "Error", err)
        return err
    }

    for percent = 0; percent < 100; percent++ {
        // Important! Use `workflow.Sleep()`, not `time.Sleep()`, because Temporal's
        // test environment doesn't stub out `time.Sleep()`.
        workflow.Sleep(ctx, time.Second*1)
    }

    return nil
}
```

This Workflow tracks the current progress of a task in percentage terms and increments the percentage by 1 every second. Below is how you would write a test case that queries this Workflow. Note that you should always query the Workflow either after `ExecuteWorkflow()` is done or in a `RegisterDelayedCallback()` callback; otherwise, you'll get a `runtime error` panic.

```go
func (s *UnitTestSuite) Test_ProgressWorkflow() {
    value := 0

    // After 10 seconds plus padding, progress should be 10.
    // Note that `RegisterDelayedCallback()` doesn't actually make your test wait for 10 seconds!
    // Temporal's test framework advances time internally, so this test should take < 1 second.
    s.env.RegisterDelayedCallback(func() {
        res, err := s.env.QueryWorkflow("getProgress")
        s.NoError(err)
        err = res.Get(&value)
        s.NoError(err)
        s.Equal(10, value)
    }, time.Second*10+time.Millisecond*1)

    s.env.ExecuteWorkflow(ProgressWorkflow, 0)

    s.True(s.env.IsWorkflowCompleted())

    // Once the workflow is completed, progress should always be 100
    res, err := s.env.QueryWorkflow("getProgress")
    s.NoError(err)
    err = res.Get(&value)
    s.NoError(err)
    s.Equal(100, value)
}
```

:::note

`RegisterDelayedCallback` can also be used to send [Signals](/sending-messages#sending-signals). When using "Signal-With-Start", set the delay to `0`.

:::

#### Debugging

You can use a debugger tool provided by your favorite IDE to debug your Workflow Definitions prior to testing or executing them.

The Temporal Go SDK includes deadlock detection, which fails a Workflow Task if the code blocks for more than a second without relinquishing execution control. Because of this, you may encounter a `PanicError: Potential deadlock detected` while stepping through Workflow Definitions during debugging. To alleviate this issue, set the `TEMPORAL_DEBUG` environment variable to `true` before debugging your Workflow Definition.

:::note

Make sure to set `TEMPORAL_DEBUG` to `true` only during debugging.

:::

---

## Enriching the User Interface - Go SDK

Temporal supports adding context to Workflows and Events with metadata. This helps users identify and understand Workflows and their operations.
## Adding Summary and Details to Workflows

### Starting a Workflow

When starting a Workflow, you can provide a static summary and details to help identify the Workflow in the UI:

```go
package main

import (
    "context"

    "go.temporal.io/sdk/client"
)

func main() {
    // Create the client
    c, err := client.Dial(client.Options{})
    if err != nil {
        // Handle error
    }
    defer c.Close()

    // Start workflow options with static summary and details
    workflowOptions := client.StartWorkflowOptions{
        ID:            "your-workflow-id",
        TaskQueue:     "your-task-queue",
        StaticSummary: "Order processing for customer #12345",
        StaticDetails: "Processing premium order with expedited shipping",
    }

    // Start the workflow
    we, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflow, "workflow input")
    if err != nil {
        // Handle error
    }
    _ = we // The returned WorkflowRun handle can be used to get the result.
}
```

`StaticSummary` is a single-line description that appears in the Workflow list view, limited to 200 bytes. `StaticDetails` can be multi-line and provides more comprehensive information that appears in the Workflow details view, with a larger limit of 20K bytes. The input format is standard Markdown, excluding images, HTML, and scripts.

### Inside the Workflow

Within a Workflow, you can get and set the _current workflow details_. Unlike the static summary/details set at Workflow start, this value can be updated throughout the life of the Workflow. Current Workflow details also take Markdown format (excluding images, HTML, and scripts) and can span multiple lines.

```go
import (
    "go.temporal.io/sdk/workflow"
)

func YourWorkflow(ctx workflow.Context, input string) (string, error) {
    // Get the current details
    currentDetails := workflow.GetCurrentDetails(ctx)
    workflow.GetLogger(ctx).Info("Current details", "details", currentDetails)

    // Set/update the current details
    workflow.SetCurrentDetails(ctx, "Updated workflow details with new status")

    return "Workflow completed", nil
}
```

### Adding Summary to Activities and Timers

You can attach a metadata parameter `Summary` to Activities when starting them from within a Workflow:

```go
import (
    "time"

    "go.temporal.io/sdk/workflow"
)

func YourWorkflow(ctx workflow.Context, input string) (string, error) {
    // Activity options with summary
    ao := workflow.ActivityOptions{
        StartToCloseTimeout: 10 * time.Second,
        Summary:             "Processing user data",
    }
    ctx = workflow.WithActivityOptions(ctx, ao)

    // Execute the activity
    var result string
    err := workflow.ExecuteActivity(ctx, YourActivity, input).Get(ctx, &result)
    if err != nil {
        return "", err
    }

    return result, nil
}
```

Similarly, you can attach a `Summary` to Timers within a Workflow:

```go
import (
    "time"

    "go.temporal.io/sdk/workflow"
)

func YourWorkflow(ctx workflow.Context, input string) (string, error) {
    // Create a timer with options, including a summary
    timerFuture := workflow.NewTimerWithOptions(ctx, 5*time.Minute, workflow.TimerOptions{
        Summary: "Waiting for payment confirmation",
    })

    // Wait for the timer
    err := timerFuture.Get(ctx, nil)
    if err != nil {
        return "", err
    }

    return "Timer completed", nil
}
```

The input format for `Summary` is a string limited to 200 bytes.

## Viewing Summary and Details in the UI

Once you've added summaries and details to your workflows, activities, and timers, you can view this enriched information in the Temporal Web UI.
Navigate to your Workflow's details page to see the metadata displayed in two key locations:

### Workflow Overview Section

At the top of the workflow details page, you'll find the workflow-level metadata:

- **Summary & Details** - Displays the static summary and static details set when starting the workflow
- **Current Details** - Displays the dynamic details that can be updated during workflow execution

All Workflow details support standard Markdown formatting (excluding images, HTML, and scripts), allowing you to create rich, structured information displays.

### Event History

Individual events in the Workflow's Event History display their associated summaries when available. Workflow, Activity, and Timer summaries appear in purple text next to their corresponding events, providing immediate context without requiring you to expand the event details. When you do expand an event, the summary is also prominently displayed in the detailed view.

---

## Error handling - Go SDK

An Activity or a Child Workflow might fail, and you could handle errors differently based on the different error cases.

If the Activity returns an error as `errors.New()` or `fmt.Errorf()`, that error is converted into `*temporal.ApplicationError`.

If the Activity returns an error as `temporal.NewNonRetryableApplicationError("error message", details)`, that error is returned as `*temporal.ApplicationError`.

There are other types of errors, such as `*temporal.TimeoutError`, `*temporal.CanceledError`, and `*temporal.PanicError`.

Here's an example of handling Activity errors within Workflow code that differentiates between error types. (`enumspb` refers to the `go.temporal.io/api/enums/v1` package.)

```go
err := workflow.ExecuteActivity(ctx, YourActivity, ...).Get(ctx, nil)
if err != nil {
    var applicationErr *temporal.ApplicationError
    if errors.As(err, &applicationErr) {
        // retrieve the error message
        fmt.Println(applicationErr.Error())
        // handle Activity errors (created via the NewApplicationError() API)
        var detailMsg string // assuming the Activity returned an error created by NewApplicationError("message", true, "string details")
        applicationErr.Details(&detailMsg) // extract strongly typed details
        // handle Activity errors (errors created other than by using the NewApplicationError() API)
        switch applicationErr.Type() {
        case "CustomErrTypeA":
            // handle CustomErrTypeA
        case "CustomErrTypeB":
            // handle CustomErrTypeB
        default:
            // a newer version of the Activity could return error types that the Workflow is not aware of
        }
    }

    var canceledErr *temporal.CanceledError
    if errors.As(err, &canceledErr) {
        // handle cancellation
    }

    var timeoutErr *temporal.TimeoutError
    if errors.As(err, &timeoutErr) {
        // handle timeout; check the timeout type with timeoutErr.TimeoutType()
        switch timeoutErr.TimeoutType() {
        case enumspb.TIMEOUT_TYPE_SCHEDULE_TO_START:
            // Handle ScheduleToStart timeout.
        case enumspb.TIMEOUT_TYPE_START_TO_CLOSE:
            // Handle StartToClose timeout.
        case enumspb.TIMEOUT_TYPE_HEARTBEAT:
            // Handle heartbeat timeout.
        default:
        }
    }

    var panicErr *temporal.PanicError
    if errors.As(err, &panicErr) {
        // handle panic; the message and call stack are available from panicErr.Error() and panicErr.StackTrace()
    }
}
```

---

## Failure detection - Go SDK

This page shows how to do the following:

- [Set Workflow timeouts](#workflow-timeouts)
- [Set a Workflow Retry Policy](#workflow-retries)
- [Set Activity timeouts](#activity-timeouts)
- [Set a custom Activity Retry Policy](#activity-retries)

## Workflow timeouts {#workflow-timeouts}

**How to set Workflow timeouts using the Temporal Go SDK**

Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution. Workflow timeouts are set when [starting the Workflow Execution](#workflow-timeouts).

Before we continue, we want to note that we generally do not recommend setting Workflow Timeouts, because Workflows are designed to be long-running and resilient. Setting a Timeout can limit a Workflow's ability to handle unexpected delays or long-running processes. If you need to perform an action inside your Workflow after a specific period of time, we recommend using a Timer.

- **[Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout):** restricts the maximum amount of time that a single Workflow Execution can be executed.
- **[Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout):** restricts the maximum amount of time that a single Workflow Run can last.
- **[Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout):** restricts the maximum amount of time that a Worker can execute a Workflow Task.

Create an instance of [`StartWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set a timeout, and pass the instance to the `ExecuteWorkflow` call.

Available timeouts are:

- `WorkflowExecutionTimeout`
- `WorkflowRunTimeout`
- `WorkflowTaskTimeout`

```go
workflowOptions := client.StartWorkflowOptions{
    // ...
    // Set Workflow Timeout duration
    WorkflowExecutionTimeout: 24 * 365 * 10 * time.Hour,
    // WorkflowRunTimeout: 24 * 365 * 10 * time.Hour,
    // WorkflowTaskTimeout: 10 * time.Second,
    // ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
    // ...
}
```

### Workflow Retry Policy {#workflow-retries}

**How to set a Workflow Retry Policy using the Go SDK.**

A Retry Policy can work in cooperation with the timeouts to provide fine-grained controls that optimize the execution experience.

Use a [Retry Policy](/encyclopedia/retry-policies) to retry a Workflow Execution in the event of a failure. Workflow Executions do not retry by default, and Retry Policies should be used with Workflow Executions only in certain situations.

Create an instance of a [`RetryPolicy`](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy) from the `go.temporal.io/sdk/temporal` package and provide it as the value to the `RetryPolicy` field of the instance of `StartWorkflowOptions`.

- Type: [`RetryPolicy`](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy)
- Default: None

```go
retrypolicy := &temporal.RetryPolicy{
    InitialInterval:    time.Second,
    BackoffCoefficient: 2.0,
    MaximumInterval:    time.Second * 100,
}
workflowOptions := client.StartWorkflowOptions{
    RetryPolicy: retrypolicy,
    // ...
}
workflowRun, err := temporalClient.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
    // ...
}
```

## How to set Activity timeouts {#activity-timeouts}

**How to set Activity timeouts using the Go SDK.**

Each Activity timeout controls the maximum duration of a different aspect of an Activity Execution. The following timeouts are available in the Activity Options.

- **[Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout):** is the maximum amount of time allowed for the overall [Activity Execution](/activity-execution).
- **[Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout):** is the maximum time allowed for a single [Activity Task Execution](/tasks#activity-task-execution).
- **[Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout):** is the maximum amount of time that is allowed from when an [Activity Task](/tasks#activity-task) is scheduled to when a [Worker](/workers#worker) starts that Activity Task.

An Activity Execution must have either the Start-To-Close or the Schedule-To-Close Timeout set.

To set an Activity Timeout in Go, create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the Activity Timeout field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.

Available timeouts are:

- `StartToCloseTimeout`
- `ScheduleToCloseTimeout`
- `ScheduleToStartTimeout`

```go
activityoptions := workflow.ActivityOptions{
    // Set Activity Timeout duration
    ScheduleToCloseTimeout: 10 * time.Second,
    // StartToCloseTimeout: 10 * time.Second,
    // ScheduleToStartTimeout: 10 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, activityoptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

### Set a custom Activity Retry Policy {#activity-retries}

**How to set a custom Activity Retry Policy using the Go SDK.**

A Retry Policy works in cooperation with the timeouts to provide fine-grained controls that optimize the execution experience. Activity Executions are automatically associated with a default [Retry Policy](/encyclopedia/retry-policies) if a custom one is not provided.

To set a [RetryPolicy](/encyclopedia/retry-policies), create an instance of `ActivityOptions` from the `go.temporal.io/sdk/workflow` package, set the `RetryPolicy` field, and then use the `WithActivityOptions()` API to apply the options to the instance of `workflow.Context`.

- Type: [`RetryPolicy`](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy)
- Default:

```go
retrypolicy := &temporal.RetryPolicy{
    InitialInterval:        time.Second,
    BackoffCoefficient:     2.0,
    MaximumInterval:        time.Second * 100, // 100 * InitialInterval
    MaximumAttempts:        0,                 // Unlimited
    NonRetryableErrorTypes: []string{},        // empty
}
```

Providing a Retry Policy here is a customization that overrides the individual field defaults.

```go
retrypolicy := &temporal.RetryPolicy{
    InitialInterval:    time.Second,
    BackoffCoefficient: 2.0,
    MaximumInterval:    time.Second * 100,
}
activityoptions := workflow.ActivityOptions{
    RetryPolicy: retrypolicy,
}
ctx = workflow.WithActivityOptions(ctx, activityoptions)
var yourActivityResult YourActivityResult
err = workflow.ExecuteActivity(ctx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
if err != nil {
    // ...
}
```

### Overriding the retry interval with Next Retry Delay {#next-retry-delay}

You may return an [Application Failure](/references/failures#application-failure) with the `NextRetryDelay` field set. This value replaces and overrides whatever the retry interval would be on the Retry Policy.

For example, if in an Activity you want to base the interval on the number of attempts:

```go
attempt := activity.GetInfo(ctx).Attempt

return temporal.NewApplicationErrorWithOptions(
    fmt.Sprintf("Something bad happened on attempt %d", attempt),
    "NextDelay",
    temporal.ApplicationErrorOptions{
        NextRetryDelay: time.Duration(attempt) * 3 * time.Second, // scale the delay with the attempt count
    },
)
```

## Activity Heartbeats {#activity-heartbeats}

**How to Heartbeat an Activity using the Go SDK.**

An [Activity Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) is a ping from the [Worker Process](/workers#worker-process) that is executing the Activity to the [Temporal Service](/temporal-service). Each Heartbeat informs the Temporal Service that the [Activity Execution](/activity-execution) is making progress and the Worker has not crashed. If the Temporal Service does not receive a Heartbeat within a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) time period, the Activity is considered failed, and another [Activity Task Execution](/tasks#activity-task-execution) may be scheduled according to the Retry Policy.

Heartbeats may not always be sent to the Temporal Service; the Worker may [throttle](/encyclopedia/detecting-activity-failures#throttling) them.

Activity Cancellations are delivered to Activities from the Temporal Service when they Heartbeat. Activities that don't Heartbeat can't receive a Cancellation. Heartbeat throttling may lead to a Cancellation being delivered later than expected.

Heartbeats can contain a `details` field describing the Activity's current progress. If an Activity gets retried, the Activity can access the `details` from the last Heartbeat that was sent to the Temporal Service.

To [Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) in an Activity in Go, use the `RecordHeartbeat` API.

```go
import (
    // ...
    "go.temporal.io/sdk/activity"
    // ...
)

func YourActivityDefinition(ctx context.Context, param YourActivityDefinitionParam) (YourActivityDefinitionResult, error) {
    // ...
    activity.RecordHeartbeat(ctx, details) // details: serializable progress information
    // ...
}
```

When an Activity Task Execution times out due to a missed Heartbeat, the last value of the `details` variable above is returned to the calling Workflow in the `details` field of `TimeoutError`, with `TimeoutType` set to `Heartbeat`.

You can also Heartbeat an Activity from an external source:

```go
// The client is a heavyweight object that should be created once per process.
temporalClient, err := client.Dial(client.Options{})
// ...
// Record a heartbeat.
err = temporalClient.RecordActivityHeartbeat(ctx, taskToken, details)
```

The parameters of the `RecordActivityHeartbeat` function are:

- `taskToken`: The value of the binary `TaskToken` field of the `ActivityInfo` struct retrieved inside the Activity.
- `details`: The serializable payload containing progress information.

If an Activity Execution Heartbeats its progress before it fails, the retry attempt will have access to the progress information, so that the Activity Execution can resume from the failed state.
Here's an example of how this can be implemented:

```go
func SampleActivity(ctx context.Context, inputArg InputParams) error {
    startIdx := inputArg.StartIndex
    if activity.HasHeartbeatDetails(ctx) {
        // Recover from finished progress.
        var finishedIndex int
        if err := activity.GetHeartbeatDetails(ctx, &finishedIndex); err == nil {
            startIdx = finishedIndex + 1 // Start from next one.
        }
    }
    // Normal Activity logic...
    for i := startIdx; i < taskCount; i++ { // taskCount: the total amount of work (illustrative)
        // Process the i-th unit of work, then record progress so a retry can resume here.
        activity.RecordHeartbeat(ctx, i)
    }
    return nil
}
```

Headers are carried as a string-to-`Payload` map, defined in the Temporal API as:

```proto
message Header {
  map<string, Payload> fields = 1;
}
```

`Client` leverages headers to pass around additional context information.

[HeaderReader](https://pkg.go.dev/go.temporal.io/sdk/internal#HeaderReader) and [HeaderWriter](https://pkg.go.dev/go.temporal.io/sdk/internal#HeaderWriter) are interfaces that allow reading and writing to the Temporal Server headers. The SDK includes [implementations](https://github.com/temporalio/sdk-go/blob/master/internal/headers.go) for these interfaces. `HeaderWriter` sets a value for a header. Headers are held as a map, so setting a value for the same key overwrites its previous value. `HeaderReader` gets the value of a header. It can also iterate through all headers and execute the provided handler function on each header, so that your code can operate on the headers you select.

```go
type HeaderWriter interface {
    Set(string, *commonpb.Payload)
}

type HeaderReader interface {
    Get(string) (*commonpb.Payload, bool)
    ForEachKey(handler func(string, *commonpb.Payload) error) error
}
```

#### Context Propagators

You can propagate additional context through a Workflow Execution by using a context propagator. A context propagator needs to implement the `ContextPropagator` interface, which includes the following four methods:

```go
type ContextPropagator interface {
    Inject(context.Context, HeaderWriter) error

    Extract(context.Context, HeaderReader) (context.Context, error)

    InjectFromWorkflow(Context, HeaderWriter) error

    ExtractToWorkflow(Context, HeaderReader) (Context, error)
}
```

- `Inject` reads select context keys from a Go [context.Context](https://golang.org/pkg/context/#Context) object and writes them into the headers using the [HeaderWriter](https://pkg.go.dev/go.temporal.io/sdk/internal#HeaderWriter) interface.
- `InjectFromWorkflow` operates similarly to `Inject`, but reads from a [workflow.Context](https://pkg.go.dev/go.temporal.io/sdk/internal#Context) object.
- `Extract` picks select headers and puts their values into the [context.Context](https://golang.org/pkg/context/#Context) object.
- `ExtractToWorkflow` operates similarly to `Extract`, but writes to a [workflow.Context](https://pkg.go.dev/go.temporal.io/sdk/internal#Context) object.

The [tracing context propagator](https://github.com/temporalio/samples-go/tree/main/ctxpropagation) shows a sample implementation of a context propagator.

#### Is there a complete example?

The [context propagation sample](https://github.com/temporalio/samples-go/blob/master/ctxpropagation/) configures a custom context propagator and shows context propagation of custom keys across a Workflow and an Activity. It also uses Jaeger for tracing.

#### Can I configure multiple context propagators?

Yes. Multiple context propagators help to structure code, with each propagator having its own scope of responsibility.

### Context Propagation Over Nexus Operation Calls

Nexus does not use the standard context propagator header structure. Instead, it relies on a Temporal-agnostic protocol designed to connect arbitrary systems. To propagate context over Nexus Operation calls, the context is serialized into a `nexus.Header`.
This is essentially a wrapper around `map[string]string` with helper methods to `Set` and `Get` values. The header normalizes all keys to lowercase.

Because Nexus uses this custom format, and because Nexus calls may involve external systems, the `ContextPropagator` interface doesn't apply to Nexus headers. Context must be explicitly propagated through interceptors, as shown in the [Nexus Context Propagation sample](https://github.com/temporalio/samples-go/tree/main/nexus-context-propagation).

### Useful Resources

- [Passing Context with Temporal](https://spiralscout.com/blog/passing-context-with-temporal) by SpiralScout

The [Go SDK](https://github.com/temporalio/sdk-go) provides support for distributed tracing with **_Interceptors_**. Interceptors use Temporal headers to create a call graph of a [Workflow](/workflows), along with its [Activities](/activities) and [Child Workflows](/child-workflows).

There are several tracing implementations supported by the Temporal Go SDK.

For an [OpenTracing](https://pkg.go.dev/go.temporal.io/sdk/contrib/opentracing) Interceptor, use `opentracing.NewInterceptor(opentracing.TracerOptions{})` to create a `TracingInterceptor`.

```go
// create Interceptor
tracingInterceptor, err := opentracing.NewInterceptor(opentracing.TracerOptions{})
```

For an [OpenTelemetry](https://pkg.go.dev/go.temporal.io/sdk/contrib/opentelemetry) Interceptor, use `opentelemetry.NewTracingInterceptor(opentelemetry.TracerOptions{})`.

```go
// create Interceptor
tracingInterceptor, err := opentelemetry.NewTracingInterceptor(opentelemetry.TracerOptions{})
```

For a [Datadog](https://pkg.go.dev/go.temporal.io/sdk/contrib/datadog/tracing) Interceptor, use `tracing.NewTracingInterceptor(tracing.TracerOptions{})`.

```go
// create Interceptor
tracingInterceptor, err := tracing.NewTracingInterceptor(tracing.TracerOptions{})
```

Pass the newly created Interceptor to [ClientOptions](https://pkg.go.dev/go.temporal.io/sdk/internal#ClientOptions) to enable tracing.

```go
c, err := client.Dial(client.Options{
    Interceptors: []interceptor.ClientInterceptor{tracingInterceptor},
})
```

OpenTracing and OpenTelemetry are natively supported by [Jaeger](https://www.jaegertracing.io/docs/1.46/features/#native-support-for-opentracing-and-opentelemetry). For more information on configuring and using tracing, see the documentation provided by [OpenTracing](https://opentracing.io), [OpenTelemetry](https://opentelemetry.io/), and [Datadog](https://docs.datadoghq.com/tracing/).

## Log from a Workflow {#logging}

**How to log from a Workflow using the Go SDK.**

Send logs and errors to a logging service, so that when things go wrong, you can see what happened. Loggers create an audit trail and capture information about your Workflow's operation.

An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.

The logger supports the following logging levels:

| Level   | Use                                                                                                        |
| ------- | ---------------------------------------------------------------------------------------------------------- |
| `TRACE` | The most detailed level of logging, used for very fine-grained information.                                |
| `DEBUG` | Detailed information, typically useful for debugging purposes.                                             |
| `INFO`  | General information about the application's operation.                                                     |
| `WARN`  | Indicates potentially harmful situations or minor issues that don't prevent the application from working.   |
| `ERROR` | Indicates error conditions that might still allow the application to continue running.                     |

The Temporal SDK core normally uses `WARN` as its default logging level.

In Workflow Definitions you can use [`workflow.GetLogger(ctx)`](https://pkg.go.dev/go.temporal.io/sdk/workflow#GetLogger) to write logs.

```go
import (
	"go.temporal.io/sdk/workflow"
)

// Workflow is a standard Workflow Definition.
func Workflow(ctx workflow.Context, name string) (string, error) {
	// Get the logger from the Workflow context.
	logger := workflow.GetLogger(ctx)

	// Log a message with the key-value pair "name" and its value.
	logger.Info("Workflow started", "name", name)

	result := "Hello " + name

	logger.Info("Workflow completed.", "result", result)
	return result, nil
}
```

### Provide a custom logger {#custom-logger}

**How to provide a custom logger to the Temporal Client using the Go SDK.**

The `Logger` field of `client.Options` sets a custom Logger that is used for all logging actions of that instance of the Temporal Client. Although the Go SDK does not support most third-party logging solutions natively, [our friends at Banzai Cloud](https://github.com/sagikazarmark) built the adapter package [logur](https://github.com/logur/logur), which makes it possible to use third-party loggers with minimal overhead. Most of the popular logging solutions already have adapters in Logur; you can find the full list [in the Logur GitHub project](https://github.com/logur?q=adapter-).

Here is an example of using Logur to support [Logrus](https://github.com/sirupsen/logrus):

```go
package main

import (
	"go.temporal.io/sdk/client"

	"github.com/sirupsen/logrus"
	logrusadapter "logur.dev/adapter/logrus"
	"logur.dev/logur"
)

func main() {
	// ...
	logger := logur.LoggerToKV(logrusadapter.New(logrus.New()))
	clientOptions := client.Options{
		Logger: logger,
	}
	temporalClient, err := client.Dial(clientOptions)
	// ...
}
```

## Visibility APIs {#visibility}

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

### Search Attributes {#search-attributes}

**How to use Search Attributes using the Go SDK.**

The typical method of retrieving a Workflow Execution is by its Workflow Id. However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments. You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime` and `ExecutionStatus` are automatically added to Workflow Executions.
- _Custom Search Attributes_ can contain their own domain-specific data (like `customerId` or `numItems`).
- A few [generic Custom Search Attributes](/search-attribute#custom-search-attribute) like `CustomKeywordField` and `CustomIntField` are created by default in Temporal's [Docker Compose](https://github.com/temporalio/docker-compose).

The steps to using custom Search Attributes are:

- Create a new Search Attribute in your Temporal Service using `temporal operator search-attribute create` or the Cloud UI (see the example command after this list).
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client by including it as an option when starting the Execution.
  - In the Workflow by calling `UpsertSearchAttributes`.
- Read the value of the Search Attribute:
  - On the Client by calling `DescribeWorkflow`.
  - In the Workflow by looking at `WorkflowInfo`.
- Query Workflow Executions by the Search Attribute using a [List Filter](/list-filter):
  - [In the Temporal CLI](/cli/workflow#list).
  - In code by calling `ListWorkflowExecutions`.
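For example, you might create a Keyword attribute with the Temporal CLI; the attribute name here is illustrative:

```bash
temporal operator search-attribute create --name CustomerId --type Keyword
```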
Here is how to query Workflow Executions:

The [ListWorkflow()](https://pkg.go.dev/go.temporal.io/sdk/client#Client.ListWorkflow) function retrieves a list of [Workflow Executions](/workflow-execution) that match the [Search Attributes](/search-attribute) of a given [List Filter](/list-filter). The metadata returned from the [Visibility](/temporal-service/visibility) store can be used to get a Workflow Execution's history and details from the [Persistence](/temporal-service/persistence) store.

Use a List Filter to define a `request` to pass into `ListWorkflow()`.

```go
request := &workflowservice.ListWorkflowExecutionsRequest{
	Query: "CloseTime = missing",
}
```

This `request` value returns only open Workflows. For more List Filter examples, see the [examples provided for List Filters in the Temporal Visibility guide](/list-filter#list-filter-examples).

```go
resp, err := temporalClient.ListWorkflow(context.Background(), request)
if err != nil {
	return err
}
fmt.Println("First page of results:")
for _, exec := range resp.Executions {
	fmt.Printf("Workflow ID %v\n", exec.Execution.WorkflowId)
}
```
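`ListWorkflow` returns one page of results at a time. A minimal sketch of walking the remaining pages with the response's `NextPageToken`, continuing the `request`, `resp`, and `temporalClient` from the example above:

```go
// Keep requesting pages until the server returns an empty token.
for len(resp.NextPageToken) > 0 {
	request.NextPageToken = resp.NextPageToken
	resp, err = temporalClient.ListWorkflow(context.Background(), request)
	if err != nil {
		return err
	}
	for _, exec := range resp.Executions {
		fmt.Printf("Workflow ID %v\n", exec.Execution.WorkflowId)
	}
}
```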
### Set custom Search Attributes {#custom-search-attributes}

**How to set custom Search Attributes using the Go SDK.**

After you've created custom Search Attributes in your Temporal Service (using the `temporal operator search-attribute create` command or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

Provide key-value pairs in [`StartWorkflowOptions.SearchAttributes`](https://pkg.go.dev/go.temporal.io/sdk/internal#StartWorkflowOptions). Search Attributes are represented as `map[string]interface{}`. The values in the map must correspond to the [Search Attribute's value type](/search-attribute#supported-types):

- Bool = `bool`
- Datetime = `time.Time`
- Double = `float64`
- Int = `int64`
- Keyword = `string`
- Text = `string`

If you had custom Search Attributes `CustomerId` of type Keyword and `MiscData` of type Text, you would provide `string` values:

```go
func (c *Client) CallYourWorkflow(ctx context.Context, workflowID string, payload map[string]interface{}) error {
	// ...
	searchAttributes := map[string]interface{}{
		"CustomerId": payload["customer"],
		"MiscData":   payload["miscData"],
	}
	options := client.StartWorkflowOptions{
		SearchAttributes: searchAttributes,
		// ...
	}
	we, err := c.Client.ExecuteWorkflow(ctx, options, app.YourWorkflow, payload)
	// ...
}
```

### Upsert Search Attributes {#upsert-search-attributes}

**How to upsert Search Attributes using the Go SDK.**

You can upsert Search Attributes to add or update Search Attributes from within Workflow code. In advanced cases, you may want to dynamically update these attributes as the Workflow progresses. [UpsertSearchAttributes](https://pkg.go.dev/go.temporal.io/sdk/workflow#UpsertSearchAttributes) is used to add or update Search Attributes from within Workflow code. `UpsertSearchAttributes` merges attributes into the existing map in the Workflow. Consider this example Workflow code:

```go
func YourWorkflow(ctx workflow.Context, input string) error {
	attr1 := map[string]interface{}{
		"CustomIntField":  1,
		"CustomBoolField": true,
	}
	_ = workflow.UpsertSearchAttributes(ctx, attr1)

	attr2 := map[string]interface{}{
		"CustomIntField":     2,
		"CustomKeywordField": "seattle",
	}
	_ = workflow.UpsertSearchAttributes(ctx, attr2)
	return nil
}
```

After the second call to `UpsertSearchAttributes`, the map will contain:

```go
map[string]interface{}{
	"CustomIntField":     2, // last update wins
	"CustomBoolField":    true,
	"CustomKeywordField": "seattle",
}
```

### Remove a Search Attribute from a Workflow {#remove-search-attribute}

**How to remove a Search Attribute from a Workflow using the Go SDK.**

To remove a Search Attribute that was previously set, set it to an empty array: `[]`. This clears the value; **there is no support for removing the field itself.** However, to achieve a similar effect, set the field to some placeholder value. For example, you could set `CustomKeywordField` to `impossibleVal`. Then searching `CustomKeywordField != 'impossibleVal'` will match Workflows with `CustomKeywordField` not equal to `impossibleVal`, which includes Workflows without the `CustomKeywordField` set.

---

## Schedules - Go SDK

This page shows how to do the following:

- [Scheduled Workflows](#schedule-a-workflow)
- [Create a Schedule](#create-schedule)
- [Backfill a Schedule](#backfill-schedule)
- [Delete a Schedule](#delete-schedule)
- [Describe a Schedule](#describe-schedule)
- [List Schedules](#list-schedules)
- [Pause a Schedule](#pause-schedule)
- [Trigger a Schedule](#trigger-schedule)
- [Update a Schedule](#update-schedule)
- [Start delay](#start-delay)
- [Temporal Cron Jobs](#temporal-cron-jobs)

## Scheduled Workflows {#schedule-a-workflow}

Scheduling Workflows is a crucial aspect of any automation process, especially when dealing with time-sensitive tasks. By scheduling a Workflow, you can automate repetitive tasks, reduce the need for manual intervention, and ensure timely execution of your business processes. Use any of the following actions to schedule a Workflow Execution and take control of your automation process.

### Create a Schedule {#create-schedule}

**How to create a Schedule for a Workflow using the Go SDK.**

Schedules are initiated with the `create` call. The user generates a unique Schedule ID for each new Schedule.

To create a Schedule in Go, use `Create()` on the [Client](/encyclopedia/temporal-sdks#temporal-client). Schedules must be initialized with a Schedule ID, [Spec](/schedule), and [Action](/schedule) in `client.ScheduleOptions{}`.

```go
func main() {
	// ...
	scheduleID := "schedule_id"
	workflowID := "schedule_workflow_id"
	// Create the schedule.
	scheduleHandle, err := temporalClient.ScheduleClient().Create(ctx, client.ScheduleOptions{
		ID:   scheduleID,
		Spec: client.ScheduleSpec{},
		Action: &client.ScheduleWorkflowAction{
			ID:        workflowID,
			Workflow:  schedule.ScheduleWorkflow,
			TaskQueue: "schedule",
		},
	})
	// ...
}
// ...
```
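The empty `Spec` above creates a Schedule whose Action runs only when triggered or backfilled, which suits the sample flow on this page. As a sketch, a Spec that fires on a recurring interval could look like this (the five-minute interval is illustrative):

```go
// Run the Schedule's Action every 5 minutes.
spec := client.ScheduleSpec{
	Intervals: []client.ScheduleIntervalSpec{
		{Every: 5 * time.Minute},
	},
}
```

Pass this as the `Spec` field in `client.ScheduleOptions` when calling `Create()`. `ScheduleSpec` also accepts calendar-based specs and cron expressions.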
:::tip Schedule Auto-Deletion

Once a Schedule has completed creating all its Workflow Executions, the Temporal Service deletes it since it won't fire again. The Temporal Service doesn't guarantee when this removal will happen.

:::

### Backfill a Schedule {#backfill-schedule}

**How to Backfill a Schedule for a Workflow using the Go SDK.**

Backfilling a Schedule executes [Workflow Tasks](/tasks#workflow-task) ahead of the Schedule's specified time range. This is useful for executing a missed or delayed Action, or for testing the Workflow ahead of time.

To backfill a Schedule in Go, use `Backfill()` on `ScheduleHandle`. Specify the start and end times to execute the Workflow, along with the overlap policy.

```go
func main() {
	// ...
	now := time.Now()
	err = scheduleHandle.Backfill(ctx, client.ScheduleBackfillOptions{
		Backfill: []client.ScheduleBackfill{
			{
				Start:   now.Add(-4 * time.Minute),
				End:     now.Add(-2 * time.Minute),
				Overlap: enums.SCHEDULE_OVERLAP_POLICY_ALLOW_ALL,
			},
			{
				Start:   now.Add(-2 * time.Minute),
				End:     now,
				Overlap: enums.SCHEDULE_OVERLAP_POLICY_ALLOW_ALL,
			},
		},
	})
	if err != nil {
		log.Fatalln("Unable to Backfill Schedule", err)
	}
	// ...
}
// ...
```

### Delete a Schedule {#delete-schedule}

**How to delete a Schedule for a Workflow using the Go SDK.**

Deleting a Schedule erases it. Deletion does not affect any Workflows started by the Schedule.

To delete a Schedule, use `Delete()` on the `ScheduleHandle`.

```go
func main() {
	// ...
	defer func() {
		log.Println("Deleting schedule", "ScheduleID", scheduleHandle.GetID())
		err = scheduleHandle.Delete(ctx)
		if err != nil {
			log.Fatalln("Unable to delete schedule", err)
		}
	}()
	// ...
```

### Describe a Schedule {#describe-schedule}

**How to describe a Schedule for a Workflow using the Go SDK.**

`Describe` retrieves information about the current Schedule configuration. This can include details about the Schedule Spec (such as Intervals), Cron Expressions, and Schedule State.

To describe a Schedule, use `Describe()` on the ScheduleHandle.

```go
func main() {
	// ...
	scheduleHandle.Describe(ctx)
	// ...
```

### List Schedules {#list-schedules}

**How to list all Schedules for Workflows using the Go SDK.**

The `List` action returns all available Schedules and their respective Schedule IDs.

To return information on all Schedules, use `ScheduleClient.List()`.

```go
func main() {
	// ...
	listView, _ := temporalClient.ScheduleClient().List(ctx, client.ScheduleListOptions{
		PageSize: 1,
	})
	for listView.HasNext() {
		log.Println(listView.Next())
	}
	// ...
```

### Pause a Schedule {#pause-schedule}

**How to pause and unpause a Schedule for a Workflow using the Go SDK.**

`Pause` and `Unpause` let you stop and resume all future Workflow Runs on a given Schedule.

Pausing a Schedule halts all future Workflow Runs. Pausing can be enabled by setting `State.Paused` to `true`, or by using `Pause()` on the ScheduleHandle.

Unpausing a Schedule allows the Workflow to execute as planned. To unpause a Schedule, use `Unpause()` on `ScheduleHandle`.

```go
func main() {
	// ...
	err = scheduleHandle.Pause(ctx, client.SchedulePauseOptions{
		Note: "The Schedule has been paused.",
	})
	// ...
	err = scheduleHandle.Unpause(ctx, client.ScheduleUnpauseOptions{
		Note: "The Schedule has been unpaused.",
	})
```

### Trigger a Schedule {#trigger-schedule}

**How to trigger a Schedule for a Workflow using the Go SDK.**

Triggering a Schedule immediately executes an Action defined in that Schedule. By default, `Trigger` is subject to the Overlap Policy.

To trigger a Scheduled Workflow Execution, use `Trigger()` on `ScheduleHandle`.
```go
func main() {
	// ...
	for i := 0; i < 5; i++ {
		scheduleHandle.Trigger(ctx, client.ScheduleTriggerOptions{
			Overlap: enums.SCHEDULE_OVERLAP_POLICY_ALLOW_ALL,
		})
		time.Sleep(2 * time.Second)
	}
	// ...
```

### Update a Schedule {#update-schedule}

**How to update a Schedule for a Workflow using the Go SDK.**

Updating a Schedule changes the configuration of an existing Schedule. These changes can be made to Workflow Actions, Action parameters, Memos, and the Workflow's Cancellation Policy.

Use `Update()` on the ScheduleHandle to modify a Schedule.

```go
func main() {
	// ...
	updateSchedule := func(input client.ScheduleUpdateInput) (*client.ScheduleUpdate, error) {
		return &client.ScheduleUpdate{
			Schedule: &input.Description.Schedule,
		}, nil
	}
	_ = scheduleHandle.Update(ctx, client.ScheduleUpdateOptions{
		DoUpdate: updateSchedule,
	})
}
// ...
```

## Start Delay {#start-delay}

**How to delay the start of a Workflow Execution using Start Delay with the Temporal Go SDK.**

Use `StartDelay` to schedule a Workflow Execution at a specific one-time future point rather than on a recurring schedule.

Create an instance of [`StartWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `StartDelay` field, and pass the instance to the `ExecuteWorkflow` call.

```go
workflowOptions := client.StartWorkflowOptions{
	// ...
	// Start the workflow in 12 hours.
	StartDelay: 12 * time.Hour,
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

## Temporal Cron Jobs {#temporal-cron-jobs}

**How to start a Workflow Execution as a Temporal Cron Job using the Go SDK.**

:::caution Cron support is not recommended

We recommend using [Schedules](https://docs.temporal.io/schedule) instead of Cron Jobs. Schedules were built to provide a better developer experience, including more configuration options and the ability to update or pause running Schedules.

:::

A [Temporal Cron Job](/cron-job) is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution.

A Cron Schedule is provided as an option when the call to spawn a Workflow Execution is made.

Create an instance of [`StartWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `CronSchedule` field, and pass the instance to the `ExecuteWorkflow` call.

- Type: `string`
- Default: None

```go
workflowOptions := client.StartWorkflowOptions{
	CronSchedule: "15 8 * * *",
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```
Temporal Workflow Schedule Cron strings follow this format:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```

---

## Selectors - Go SDK

This page shows how to do the following:

- [Use Selectors with Futures](#use-selectors-with-futures)
- [Use Selectors with Timers](#use-selectors-with-timers)
- [Use Selectors with Channels](#use-selectors-with-channels)

In Go, the `select` statement lets a goroutine wait on multiple communication operations. A `select` **blocks until one of its cases can run**, then it executes that case. If multiple cases are ready, it chooses one at random. Because of that randomness, a normal Go `select` statement cannot be used inside Workflows directly.

Temporal's Go SDK `Selector`s are similar and act as a replacement. They can block on sending to and receiving from Channels, and as a bonus they can also listen on Futures for deferred work. Using Selectors to defer and process work (in place of Go's `select`) is necessary to ensure deterministic Workflow code execution (using `select` in Activity code is fine).

## Full API example {#api-example}

The API is sufficiently different from `select` that it bears documenting:

```go
func SampleWorkflow(ctx workflow.Context) error {
	// standard Workflow setup code omitted...

	// API Example: declare a new Selector.
	selector := workflow.NewSelector(ctx)

	// API Example: defer code execution until the Future that represents the Activity result is ready.
	work := workflow.ExecuteActivity(ctx, ExampleActivity)
	selector.AddFuture(work, func(f workflow.Future) {
		// deferred code omitted...
	})

	// more parallel timers and activities initiated...

	// API Example: receive information from a Channel.
	var signalVal string
	channel := workflow.GetSignalChannel(ctx, channelName)
	selector.AddReceive(channel, func(c workflow.ReceiveChannel, more bool) {
		// Matching on the channel doesn't consume the message,
		// so it has to be explicitly consumed here.
		c.Receive(ctx, &signalVal)
		// do something with received information
	})

	// API Example: block until the next Future is ready to run.
	// Important! None of the deferred code runs until you call selector.Select.
	selector.Select(ctx)

	// selector.HasPending is covered under "Query Selector state" below.
	return nil
}
```

## Use Selectors with Futures

You usually add `Future`s after `Activities`:

```go
// API Example: defer code execution until after an activity is done
work := workflow.ExecuteActivity(ctx, ExampleActivity)
selector.AddFuture(work, func(f workflow.Future) {
	// deferred code omitted...
})
```

`selector.Select(ctx)` is the primary mechanism that blocks on and executes `Future` work. It is intentionally flexible; you may call it conditionally or multiple times:

```go
// API Example: blocking conditionally
if somecondition != nil {
	selector.Select(ctx)
}

// API Example: popping off all remaining Futures
for i := 0; i < len(someArray); i++ {
	selector.Select(ctx) // this will wait for one branch
	// you can interrupt execution here
}
```

A Future matches only once per Selector instance, even if `Select` is called multiple times. If multiple items are available, the order of matching is not defined.

## Use Selectors with Timers

An important use case for Futures is setting up a race between a Timer and a pending Activity, effectively adding a "soft" timeout that doesn't result in any errors or retries of that Activity.
For example, [the Timer sample](https://github.com/temporalio/samples-go/blob/master/timer) shows how you can write a long-running order processing operation where:

- if processing takes too long, we send a notification email to the user about the delay, but we don't cancel the operation
- if the operation finishes before the timer fires, we want to cancel the timer.

```go
// Set up a cancellable timer context and a Selector (as in the Timer sample).
childCtx, cancelHandler := workflow.WithCancel(ctx)
selector := workflow.NewSelector(ctx)

var processingDone bool
f := workflow.ExecuteActivity(ctx, OrderProcessingActivity)
selector.AddFuture(f, func(f workflow.Future) {
	processingDone = true
	// cancel timerFuture
	cancelHandler()
})

// use the timer future to send a notification email if processing takes too long
timerFuture := workflow.NewTimer(childCtx, processingTimeThreshold)
selector.AddFuture(timerFuture, func(f workflow.Future) {
	if !processingDone {
		// processing is not done yet when the timer fires; send a notification email
		_ = workflow.ExecuteActivity(ctx, SendEmailActivity).Get(ctx, nil)
	}
})

// wait for the timer or the order processing to finish
selector.Select(ctx)
```

We create timers with the `workflow.NewTimer` API.

## Use Selectors with Channels

`selector.AddReceive(channel, func(c workflow.ReceiveChannel, more bool) {})` is the primary mechanism for receiving messages from `Channels`.

```go
// API Example: receive information from a Channel
var signalVal string
channel := workflow.GetSignalChannel(ctx, channelName)
selector.AddReceive(channel, func(c workflow.ReceiveChannel, more bool) {
	c.Receive(ctx, &signalVal)
	// do something with received information
})
```

Merely matching on the channel doesn't consume the message; it has to be explicitly consumed with a `c.Receive(ctx, &signalVal)` call.

## Query Selector state

You can use the `selector.HasPending` API to ensure that Signals are not lost when a Workflow is closed (e.g. by `ContinueAsNew`).
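A minimal sketch of draining a Selector before returning a Continue-As-New error, assuming the `selector` and Workflow function from the examples above:

```go
// Process everything that is already queued on the Selector
// before closing this Run, so no Signal is dropped.
for selector.HasPending() {
	selector.Select(ctx)
}
return workflow.NewContinueAsNewError(ctx, SampleWorkflow)
```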
## Learn more

Usage of Selectors is best learned by example:

- Setting up a race between an Activity and a Timer, and executing code conditionally ([Timer example](https://github.com/temporalio/samples-go/blob/14980b3792cc3a8447318fefe9a73fe0a580d4b9/timer/workflow.go))
- Receiving information in a Channel ([Mutex example](https://github.com/temporalio/samples-go/blob/14980b3792cc3a8447318fefe9a73fe0a580d4b9/mutex/mutex_workflow.go))
- Looping through a list of work and scheduling it all in parallel ([DSL example](https://github.com/temporalio/samples-go/blob/14980b3792cc3a8447318fefe9a73fe0a580d4b9/dsl/workflow.go))
- Executing Activities in parallel, picking the first result, and cancelling the remainder ([Pick First example](https://github.com/temporalio/samples-go/blob/14980b3792cc3a8447318fefe9a73fe0a580d4b9/pickfirst/pickfirst_workflow.go))

---

## Worker Sessions - Go SDK

This page shows how to do the following:

- [Enable Worker Sessions](#enable-sessions)
- [Change the maximum concurrent Sessions of a Worker](#max-concurrent-sessions)
- [Create a Worker Session](#create-a-session)

:::tip Support, stability, and dependency info

- This feature is currently available only in the Go SDK.

:::

A Worker Session is a feature that provides a straightforward API for [Task Routing](/task-routing) to ensure that Activity Tasks are executed with the same Worker without requiring you to manually specify Task Queue names.

## Enable Worker Sessions {#enable-sessions}

**How to enable Worker Sessions using the Go SDK.**

Set `EnableSessionWorker` to `true` in the Worker options.

```go
// ...
func main() {
	// ...
	// Enable Sessions for this Worker.
	workerOptions := worker.Options{
		EnableSessionWorker: true,
		// ...
	}

	w := worker.New(temporalClient, "fileprocessing", workerOptions)
	w.RegisterWorkflow(sessions.SomeFileProcessingWorkflow)
	w.RegisterActivity(&sessions.FileActivities{})

	err = w.Run(worker.InterruptCh())
	// ...
}
```

### Change the maximum concurrent Sessions of a Worker {#max-concurrent-sessions}

**How to change the maximum concurrent Sessions of a Worker using the Go SDK.**

To limit the number of concurrent Sessions running on a Worker, set the `MaxConcurrentSessionExecutionSize` field of `worker.Options` to the desired value. By default, this field is set to a very large value, so there's no need to set it manually if no limitation is needed.

If a Worker hits this limit, it won't accept any new `CreateSession()` requests until one of the existing Sessions is completed. If the Session can't be created within `CreationTimeout`, `CreateSession()` returns an error.

```go
func main() {
	// ...
	workerOptions := worker.Options{
		// ...
		// This configures the maximum allowed concurrent sessions.
		// Customize this value only if you need to.
		MaxConcurrentSessionExecutionSize: 1000,
		// ...
	}
	// ...
```

## Create a Worker Session {#create-a-session}

**How to create a Worker Session using the Go SDK.**

Within the Workflow code, use the Workflow APIs to create a Session with whichever Worker picks up the first Activity Task.

Use the [`CreateSession`](https://pkg.go.dev/go.temporal.io/sdk/workflow#CreateSession) API to create a Context object that can be passed to calls that spawn Activity Executions. Pass an instance of `workflow.Context` and [`SessionOptions`](https://pkg.go.dev/go.temporal.io/sdk/workflow#SessionOptions) to the `CreateSession` API call and get back a Session Context that contains metadata information about the Session.

Use the Session Context to spawn all Activity Executions that should belong to the Session. All associated Activity Tasks are then processed by the same Worker Entity. When the `CreateSession` API is called, the Task Queue name specified in `ActivityOptions` (or in `StartWorkflowOptions` if the Task Queue name is not specified in `ActivityOptions`) is used, and a Session is created with one of the Workers polling that Task Queue.

The Session Context is cancelled if the Worker executing this Session dies or `CompleteSession()` is called. When using the returned Session Context to spawn Activity Executions, a `workflow.ErrSessionFailed` error is returned if the Session framework detects that the Worker executing this Session has died. The failure of Activity Executions won't affect the state of the Session, so you still need to handle the errors returned from your Activities and call `CompleteSession()` if necessary.

If the context passed in already contains an open Session, `CreateSession()` returns an error. If all the Workers are currently busy and unable to handle a new Session, the framework keeps retrying until the `CreationTimeout` period you specified in `SessionOptions` has passed before returning an error. (For more details, check the "Concurrent Session Limitation" section.)

`CompleteSession()` releases the resources reserved on the Worker, so it's important to call it as soon as you no longer need the Session. It cancels the Session Context and therefore all the Activity Executions using that Session Context.
It is safe to call `CompleteSession()` on a failed Session, which means you can call it from a `defer` function after the Session is successfully created.

If the Worker goes down between Activities, any Activities already scheduled for the Session Worker are canceled; otherwise, you get a `workflow.ErrSessionFailed` error on the next `workflow.ExecuteActivity()` call made from that Workflow.

```go
package sessions

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// ...

// SomeFileProcessingWorkflow is a Workflow Definition.
func SomeFileProcessingWorkflow(ctx workflow.Context, param FileProcessingWFParam) error {
	activityOptions := workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, activityOptions)
	// ...
	sessionOptions := &workflow.SessionOptions{
		CreationTimeout:  time.Minute,
		ExecutionTimeout: time.Minute,
	}
	// Create a Session with the Worker so that all Activities execute with the same Worker.
	sessionCtx, err := workflow.CreateSession(ctx, sessionOptions)
	if err != nil {
		return err
	}
	defer workflow.CompleteSession(sessionCtx)
	// ...
	err = workflow.ExecuteActivity(sessionCtx, a.DownloadFile, param).Get(sessionCtx, &downloadResult)
	// ...
	err = workflow.ExecuteActivity(sessionCtx, a.ProcessFile, processParam).Get(sessionCtx, &processResult)
	// ...
	err = workflow.ExecuteActivity(sessionCtx, a.UploadFile, uploadParam).Get(sessionCtx, nil)
	// ...
}
```

## Additional Session usage information {#session-metadata}

```go
type SessionInfo struct {
	// A unique ID for the Session.
	SessionID string
	// The hostname of the Worker that is executing the Session.
	HostName string
	// ... other unexported fields
}

func GetSessionInfo(ctx Context) *SessionInfo
```

The Session Context also stores some Session metadata, which can be retrieved with the `GetSessionInfo()` API. If the Context passed in doesn't contain any Session metadata, this API returns a `nil` pointer.

### Recreate Session

For long-running Sessions, you may want to use the `ContinueAsNew` feature to split the Workflow into multiple runs when all Activities need to be executed by the same Worker. The `RecreateSession()` API is designed for such a use case.

```go
func RecreateSession(ctx Context, recreateToken []byte, sessionOptions *SessionOptions) (Context, error)
```

Its usage is the same as `CreateSession()` except that it also takes in a `recreateToken`, which is needed to create a new Session on the same Worker as the previous one. You can get the token by calling the `GetRecreateToken()` method of the `SessionInfo` object.

```go
token := workflow.GetSessionInfo(sessionCtx).GetRecreateToken()
```
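Putting `RecreateSession()` together with `ContinueAsNew`, a minimal sketch (assuming the `sessionCtx` from the example above, and a Workflow that is assumed to accept the token as a parameter):

```go
// At the end of the current Run, capture the token and Continue-As-New,
// passing the token as a Workflow argument.
token := workflow.GetSessionInfo(sessionCtx).GetRecreateToken()
return workflow.NewContinueAsNewError(ctx, SomeFileProcessingWorkflow, token)
```

In the next Run, pass that token to `workflow.RecreateSession(ctx, token, sessionOptions)` instead of calling `CreateSession()`, so the new Run's Activities land on the same Worker.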
**Is there a complete example?**

Yes, the [file processing example](https://github.com/temporalio/samples-go/tree/master/fileprocessing) in the [temporalio/samples-go](https://github.com/temporalio/samples-go) repo has been updated to use the Session framework.

**What happens to my Activity if the Worker dies?**

If your Activity has already been scheduled, it will be canceled. If not, you will get a `workflow.ErrSessionFailed` error when you call `workflow.ExecuteActivity()`.

**Is the concurrent Session limitation per process or per host?**

It's per Worker Process, so make sure there's only one Worker Process running on the host if you plan to use this feature.

**Future Work**

- Right now, a Session is considered failed if the Worker Process dies. However, for some use cases, you may only care whether the Worker host is alive. For those use cases, the Session should be automatically re-established if the Worker Process is restarted.
- The current implementation assumes that all Sessions consume the same type of resource and that there's only one global limitation. Our plan is to allow you to specify what type of resource your Session consumes and to enforce different limitations on different types of resources.

---

## Set up your local development environment with the Go SDK

---

# Quickstart

Configure your local development environment to get started developing with Temporal.

## Install Go

Make sure you have Go installed. These tutorials were produced using Go 1.18. Check your version of Go with the following command:

```bash
go version
```

This will return your installed Go version, for example:

```
go version go1.18.1 darwin/amd64
```

## Install the Temporal Go SDK

If you are creating a new project using the Temporal Go SDK, start by creating a new directory and switching to it:

```bash
mkdir goproject
cd goproject
```

Then initialize a Go project in that directory:

```bash
go mod init my-org/greeting
```

Finally, install the Temporal SDK with `go get`:

```bash
go get go.temporal.io/sdk
go get go.temporal.io/sdk/client
go mod tidy
```

## Install Temporal CLI

The fastest way to get a development version of the Temporal Service running on your local machine is to use [Temporal CLI](https://docs.temporal.io/cli). Install it for your operating system:

- **macOS:** install with Homebrew:

  ```bash
  brew install temporal
  ```

- **Windows:** download the Temporal CLI archive for your architecture (amd64 or arm64), extract it, and add `temporal.exe` to your PATH.
- **Linux:** download the Temporal CLI archive for your architecture (amd64 or arm64), extract it, and move the `temporal` binary into your PATH, for example:

  ```bash
  sudo mv temporal /usr/local/bin
  ```

## Start the development server

Once you've installed Temporal CLI and added it to your PATH, open a new Terminal window and run:

```bash
temporal server start-dev
```

This command starts a local Temporal Service. It starts the Web UI, creates the default Namespace, and uses an in-memory database. The Temporal Service will be available on `localhost:7233`, and the Temporal Web UI will be available at `http://localhost:8233`.

The Temporal Web UI may be on a different port in some examples or tutorials. To change the port for the Web UI, use the `--ui-port` option when starting the server:

```bash
temporal server start-dev --ui-port 8080
```

The Temporal Web UI will then be available at `http://localhost:8080`.

Leave the local Temporal Service running as you work through tutorials and other projects. You can stop the Temporal Service at any time by pressing `CTRL+C`.

Once you have everything installed, you're ready to build apps with Temporal on your local machine.

## Run Hello World: Test Your Installation

Now let's verify your setup is working by creating and running a complete Temporal application with both a Workflow and Activity. This test will confirm that:

- The Temporal Go SDK is properly installed
- Your local Temporal Service is running
- You can successfully create and execute Workflows and Activities
- The communication between components is functioning correctly
### 1. Create the Activity

An Activity is a normal function or method that executes a single, well-defined action (either short- or long-running) that is typically prone to failure. Examples include any action that interacts with the outside world, such as sending emails, making network requests, writing to a database, or calling an API. If an Activity fails, Temporal automatically retries it based on your configuration.

Create an Activity file (`activity.go`):

```go
package greeting

import (
	"context"
	"fmt"
)

func Greet(ctx context.Context, name string) (string, error) {
	return fmt.Sprintf("Hello %s", name), nil
}
```

### 2. Create the Workflow

Workflows orchestrate Activities and contain the application logic. Temporal Workflows are resilient. They can run—and keep running—for years, even if the underlying infrastructure fails. If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off.

Create a Workflow file (`workflow.go`):

```go
package greeting

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

func SayHelloWorkflow(ctx workflow.Context, name string) (string, error) {
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: time.Second * 10,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var result string
	err := workflow.ExecuteActivity(ctx, Greet, name).Get(ctx, &result)
	if err != nil {
		return "", err
	}
	return result, nil
}
```

### 3. Create and Run the Worker

With your Activity and Workflow defined, you need a Worker to execute them. A Worker polls the Task Queue you configure it with, looking for work to do. Once the Worker dequeues a Workflow or Activity Task from the Task Queue, it executes that Task. Workers are a crucial part of your Temporal application, as they're what actually execute the Tasks defined in your Workflows and Activities.

For more information on Workers, see [Understanding Temporal](/evaluate/understanding-temporal#workers) and a [deep dive into Workers](/workers).

Create a Worker file (`worker/main.go`):

```go
package main

import (
	"log"

	"my-org/greeting"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create client", err)
	}
	defer c.Close()

	w := worker.New(c, "my-task-queue", worker.Options{})
	w.RegisterWorkflow(greeting.SayHelloWorkflow)
	w.RegisterActivity(greeting.Greet)

	err = w.Run(worker.InterruptCh())
	if err != nil {
		log.Fatalln("Unable to start worker", err)
	}
}
```

Run the Worker:

```bash
go run worker/main.go
```

### 4. Execute the Workflow

Now that your Worker is running, it's time to start a Workflow Execution.
Create a separate file called `start/main.go`:

```go
package main

import (
	"context"
	"log"
	"os"

	greeting "my-org/greeting"

	"go.temporal.io/sdk/client"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create client", err)
	}
	defer c.Close()

	options := client.StartWorkflowOptions{
		ID:        "greeting-workflow",
		TaskQueue: "my-task-queue",
	}

	we, err := c.ExecuteWorkflow(context.Background(), options, greeting.SayHelloWorkflow, os.Args[1])
	if err != nil {
		log.Fatalln("Unable to execute workflow", err)
	}
	log.Println("Started workflow", "WorkflowID", we.GetID(), "RunID", we.GetRunID())

	var result string
	err = we.Get(context.Background(), &result)
	if err != nil {
		log.Fatalln("Unable to get workflow result", err)
	}
	log.Println("Workflow result:", result)
}
```

Then run:

```bash
go run start/main.go Temporal
```

### Verify Success

If everything is working correctly, you should see:

- The Worker processing the Workflow and Activity
- Output: `Workflow result: Hello Temporal`
- Workflow Execution details in the [Temporal Web UI](http://localhost:8233)

Next: Run your first Temporal application. Create a basic Workflow and run it with the Temporal Go SDK.

---

## Side Effects - Go SDK

Side Effects are used to execute non-deterministic code, such as generating a UUID or a random number, without compromising determinism in the Workflow. This is done by storing the non-deterministic result of the Side Effect in the Workflow [Event History](/workflow-execution/event#event-history).

A Side Effect does not re-execute during a Replay. Instead, it returns the recorded result from the Workflow Execution Event History.

Side Effects should not fail. A panic raised from the Side Effect function causes failure and retry of the current Workflow Task.

An Activity or a Local Activity may also be used instead of a Side Effect, as its result is also persisted in Workflow Execution History.

:::note

You shouldn't modify the Workflow state inside a Side Effect function, because it is not re-executed during Replay. A Side Effect function should only be used to return a value.

:::

Use the [`SideEffect`](https://pkg.go.dev/go.temporal.io/sdk/workflow#SideEffect) function from the `go.temporal.io/sdk/workflow` package to execute a [Side Effect](/workflow-execution/event#side-effect) directly in your Workflow. Pass it the Workflow Context and the function to execute.

The `SideEffect` API returns an instance of [`converter.EncodedValue`](https://pkg.go.dev/go.temporal.io/sdk/workflow#SideEffect). Use the `Get` method on that value to retrieve the result of the Side Effect.

**Correct implementation**

The following example demonstrates the correct way to use `SideEffect`:

```go
encodedRandom := workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
	return rand.Intn(100)
})

var random int
encodedRandom.Get(&random)
// ...
```

**Incorrect implementation**

The following example demonstrates how NOT to use `SideEffect`:

```go
// Warning: This is an incorrect example.
// This code is non-deterministic.
var random int
workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
	random = rand.Intn(100)
	return nil
})
// random will always be 0 in replay, so this code is non-deterministic.
```

On Replay the provided function is not executed, the random number will always be 0, and the Workflow Execution could take a different path, breaking determinism.
## Mutable Side Effects {#mutable-side-effects}

A Mutable Side Effect executes the provided function once, and then looks up the value recorded in the History under the given Side Effect ID.

- If there is no existing value, it records the function result as a value with the given ID in the History.
- If there is an existing value, it compares the existing value from the History with the new function result, using the provided equals function.
  - If the values are equal, it returns the existing value without recording a new Marker Event.
  - If the values aren't equal, it records the new value with the same ID in the History.

:::note

During a Workflow Execution, every new Side Effect call results in a new Marker recorded in the Workflow History, whereas a Mutable Side Effect only records a new Marker in the Workflow History if the value for the Side Effect ID changes or is set for the first time.

During a Replay, a Mutable Side Effect does not execute the function again. Instead, it returns the exact same value that was returned during the Workflow Execution.

:::

To use [`MutableSideEffect()`](https://pkg.go.dev/go.temporal.io/sdk/workflow#MutableSideEffect) in Go, provide an ID that is unique within the scope of the Workflow.

```go
if err := workflow.MutableSideEffect(ctx, "configureNumber", get, eq).Get(&number); err != nil {
	panic("can't decode number:" + err.Error())
}
```
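The `get` and `eq` arguments above aren't defined in the snippet. A minimal sketch of what they might look like; the configuration lookup is hypothetical:

```go
// get produces the (possibly non-deterministic) value to record.
get := func(ctx workflow.Context) interface{} {
	return loadConfiguredNumber() // hypothetical: e.g. read from a config source
}
// eq tells MutableSideEffect whether the value has changed.
eq := func(a, b interface{}) bool {
	return a.(int) == b.(int)
}

var number int
if err := workflow.MutableSideEffect(ctx, "configureNumber", get, eq).Get(&number); err != nil {
	panic("can't decode number:" + err.Error())
}
```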
---

## Temporal Client - Go SDK

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the [Temporal Service](/temporal-service). Communication with a Temporal Service lets you perform actions such as starting Workflow Executions, sending Signals to Workflow Executions, sending Queries to Workflow Executions, getting the results of a Workflow Execution, and providing Activity Task Tokens.

This page shows you how to do the following using the Go SDK with the Temporal Client:

- [Connect to a local development Temporal Service](#connect-to-development-service)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Start a Workflow Execution](#start-workflow-execution)
- [Get Workflow results](#get-workflow-results)

:::caution

A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.

:::

## Connect to development Temporal Service {#connect-to-development-service}

Use the [`Dial()`](https://pkg.go.dev/go.temporal.io/sdk/client#Dial) API available in the [`go.temporal.io/sdk/client`](https://pkg.go.dev/go.temporal.io/sdk/client) package to create a [`Client`](https://pkg.go.dev/go.temporal.io/sdk/client#Client).

The `Dial()` API expects connection options such as the Temporal Server address, the Namespace to connect to, and Transport Layer Security (TLS) configuration. You can specify these options in the function call, or specify them using environment variables or a configuration file. We recommend you use environment variables or a configuration file to manage these connection options securely.

:::info Versioning Requirements

Environment variable and configuration file support were added in Go SDK v1.28.0.

:::

When you are running a Temporal Service locally, such as with the [Temporal CLI](https://docs.temporal.io/cli/server#start-dev), the connection options you must provide are minimal. If you don't provide [`HostPort`](https://pkg.go.dev/go.temporal.io/sdk/internal#ClientOptions), the Client defaults the address and port number to `127.0.0.1:7233`, which is the port of the development Temporal Service. If you don't set a custom Namespace name in the Namespace field, the Client connects to the default Namespace.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the configuration file path, the SDK looks for it at the path `~/.config/temporalio/temporal.toml`. For a list of all available configuration options, refer to [Environment Configuration](/references/client-environment-configuration).

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines two profiles: `default` and `prod`. Each profile has its own set of connection options.

```toml
# Default profile for local development
[profile.default]
address = "localhost:7233"
namespace = "default"

# Custom gRPC headers
[profile.default.grpc_meta]
my-custom-header = "development-value"
trace-id = "dev-trace-123"

# Production profile for Temporal Cloud
[profile.prod]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"

# TLS configuration for production.
# TLS is auto-enabled when this TLS config or an API key is present,
# but you can configure it explicitly.
[profile.prod.tls]
# Use certificate files for mTLS
client_cert_path = "/etc/temporal/certs/client.pem"
client_key_path = "/etc/temporal/certs/client.key"

# Custom headers for production
[profile.prod.grpc_meta]
environment = "production"
service-version = "v1.2.3"
```

You can create a Temporal Client using a specific profile from the configuration file as follows:

```go
package main

import (
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/envconfig"
)

func main() {
	// Load a specific profile from the TOML config file.
	// This requires a [profile.prod] section in your config.
	opts, err := envconfig.LoadClientOptions(envconfig.LoadClientOptionsRequest{
		ConfigFileProfile: "prod",
	})
	if err != nil {
		log.Fatalf("Failed to load 'prod' profile: %v", err)
	}

	c, err := client.Dial(opts)
	if err != nil {
		log.Fatalf("Failed to connect using 'prod' profile: %v", err)
	}
	defer c.Close()

	fmt.Printf("✅ Connected to Temporal namespace %q on %s using 'prod' profile\n",
		c.Options().Namespace, c.Options().HostPort)
}
```

Use the `envconfig` package to set connection options for the Temporal Client using environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables.
If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile from the configuration file. However, any options set via environment variables take precedence.

```go
package main

import (
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/envconfig"
)

func main() {
	// Loads the "default" profile from the standard location and environment variables.
	c, err := client.Dial(envconfig.MustLoadDefaultClientOptions())
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}
	defer c.Close()

	fmt.Printf("✅ Connected to Temporal namespace %q on %s\n",
		c.Options().Namespace, c.Options().HostPort)
}
```

If you don't want to use environment variables or a configuration file, you can specify connection options directly in code. This is convenient for local development and testing. You can also load a base configuration from environment variables or a configuration file, and then override specific options in code.

[sample-apps/go/yourapp/gateway/main.go](https://github.com/temporalio/documentation/blob/main/sample-apps/go/yourapp/gateway/main.go)

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"

	"documentation-samples-go/yourapp"

	"go.temporal.io/sdk/client"
)

func main() {
	// Create a Temporal Client to communicate with the Temporal Cluster.
	// A Temporal Client is a heavyweight object that should be created just once per process.
	temporalClient, err := client.Dial(client.Options{
		HostPort: client.DefaultHostPort,
	})
	if err != nil {
		log.Fatalln("Unable to create Temporal Client", err)
	}
	defer temporalClient.Close()
	// ...
}
```

## Connect to Temporal Cloud {#connect-to-temporal-cloud}

You can connect to Temporal Cloud using either an [API key](/cloud/api-keys) or through mTLS.

Connection to Temporal Cloud or any secured Temporal Service requires additional connection options compared to connecting to an unsecured local development instance:

- Your credentials for authentication.
  - If you are using an API key, provide the API key value.
  - If you are using mTLS, provide the mTLS CA certificate and mTLS private key.
- Your _Namespace and Account ID_ combination, which follows the format `<NamespaceID>.<AccountID>`.
- The _endpoint_ may vary. The most common endpoint used is the gRPC regional endpoint, which follows the format `<region>.<cloud_provider>.api.temporal.io:7233`.
  - For Namespaces with High Availability features with API key authentication enabled, use the gRPC Namespace endpoint: `<NamespaceID>.<AccountID>.tmprl.cloud:7233`. This allows automated failover without needing to switch endpoints.

You can find the Namespace and Account ID, as well as the endpoint, on the Namespaces tab:

![The Namespace and Account ID combination on the left, and the regional endpoint on the right](/img/cloud/apikeys/namespaces-and-regional-endpoints.png)

You can provide these connection options using environment variables, a configuration file, or directly in code.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. For a list of all available configuration options you can set in the TOML file, refer to [Environment Configuration](/references/client-environment-configuration).
You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the path to the configuration file, the SDK looks for it at the default path `~/.config/temporalio/temporal.toml`.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines a `cloud` profile with the necessary connection options to connect to Temporal Cloud via an API key:

```toml
# Cloud profile for Temporal Cloud
[profile.cloud]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"
```

If you want to use mTLS authentication instead of an API key, replace the `api_key` field with your mTLS certificate and private key:

```toml
# Cloud profile for Temporal Cloud
[profile.cloud]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
tls_client_cert_data = "your-tls-client-cert-data"
tls_client_key_path = "your-tls-client-key-path"
```

With the connection options defined in the configuration file, use the `LoadClientOptions` function in the `envconfig` package to create a Temporal Client using the `cloud` profile as follows:

```go
package main

import (
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/envconfig"
)

func main() {
	// Replace with the actual path to your TOML file.
	configFilePath := "/Users/yourname/.config/my-app/temporal.toml"

	opts, err := envconfig.LoadClientOptions(envconfig.LoadClientOptionsRequest{
		ConfigFilePath:    configFilePath,
		ConfigFileProfile: "cloud",
	})
	if err != nil {
		log.Fatalf("Failed to load client config from custom file: %v", err)
	}

	c, err := client.Dial(opts)
	if err != nil {
		log.Fatalf("Failed to connect using custom config file: %v", err)
	}
	defer c.Close()

	fmt.Printf("✅ Connected using custom config at: %s\n", configFilePath)
}
```

The following environment variables are required to connect to Temporal Cloud:

- `TEMPORAL_NAMESPACE`: Your Namespace and Account ID combination in the format `<NamespaceID>.<AccountID>`.
- `TEMPORAL_ADDRESS`: The gRPC endpoint for your Temporal Cloud Namespace.
- `TEMPORAL_API_KEY`: Your API key value. Required if you are using API key authentication.
- `TEMPORAL_TLS_CLIENT_CERT_DATA` or `TEMPORAL_TLS_CLIENT_CERT_PATH`: Your mTLS client certificate data or file path. Required if you are using mTLS authentication.
- `TEMPORAL_TLS_CLIENT_KEY_DATA` or `TEMPORAL_TLS_CLIENT_KEY_PATH`: Your mTLS client private key data or file path. Required if you are using mTLS authentication.

Ensure these environment variables exist in your environment before running your Go application.

Import the `envconfig` package to set connection options for the Temporal Client using environment variables. The `MustLoadDefaultClientOptions` function will automatically load all environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables.
If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile from the configuration file. However, any options set via environment variables take precedence.

```go
package main

import (
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/envconfig"
)

func main() {
	// Loads the "default" profile from the standard location and environment variables.
	c, err := client.Dial(envconfig.MustLoadDefaultClientOptions())
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}
	defer c.Close()

	fmt.Println("✅ Connected to Temporal Service")
}
```

You can also provide connection options directly in your Go code. When instantiating a Temporal Client in your Temporal Go SDK code, provide the following `clientOptions` (the angle-bracket values are placeholders for your endpoint and Namespace):

```go
clientOptions := client.Options{
	HostPort:          "<endpoint>",
	Namespace:         "<NamespaceID>.<AccountID>",
	ConnectionOptions: client.ConnectionOptions{TLS: &tls.Config{}},
	Credentials:       client.NewAPIKeyStaticCredentials(apiKey),
}
c, err := client.Dial(clientOptions)
```

To allow API key updates at runtime, use `NewAPIKeyDynamicCredentials`, which reads the key through a callback:

```go
// Assuming the Client Credentials were created with
var myKey string
creds := client.NewAPIKeyDynamicCredentials(
	func(context.Context) (string, error) {
		return myKey, nil
	})
// Update by replacing the value:
myKey = myKeyUpdated
```

You can use a combination of environment variables, configuration files, and code to set connection options. For example, you can load a base configuration from environment variables or a configuration file, and then override specific options in code.

The following code snippet loads the base configuration from environment variables and the default profile with `envconfig.MustLoadDefaultClientOptions()`. It then overrides the `HostPort` and `Namespace` options programmatically. Refer to the [Client Options type in the Go SDK](https://pkg.go.dev/go.temporal.io/sdk/internal#ClientOptions) for a list of all available connection options you can override.

```go
package main

import (
	"fmt"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/envconfig"
)

func main() {
	// Load the base configuration (e.g., from the default profile).
	opts := envconfig.MustLoadDefaultClientOptions()

	// Apply overrides programmatically.
	opts.HostPort = "localhost:7233"
	opts.Namespace = "test-namespace"

	c, err := client.Dial(opts)
	if err != nil {
		log.Fatalf("Failed to connect with overridden options: %v", err)
	}
	defer c.Close()

	fmt.Printf("✅ Connected with overridden config to: %s in namespace: %s\n",
		opts.HostPort, opts.Namespace)
}
```
**v1.26.0 to v1.32.x**

Create an initial connection:

```go
clientOptions := client.Options{
	HostPort:  "<endpoint>",
	Namespace: "<namespace>.<account_id>",
	ConnectionOptions: client.ConnectionOptions{
		TLS: &tls.Config{},
		DialOptions: []grpc.DialOption{
			grpc.WithUnaryInterceptor(
				func(ctx context.Context, method string, req any, reply any, cc *grpc.ClientConn,
					invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
					return invoker(
						metadata.AppendToOutgoingContext(ctx, "temporal-namespace", "<namespace>.<account_id>"),
						method,
						req,
						reply,
						cc,
						opts...,
					)
				},
			),
		},
	},
	Credentials: client.NewAPIKeyStaticCredentials(apiKey),
}
c, err := client.Dial(clientOptions)
if err != nil {
	log.Fatalf("error creating temporal client: %v", err)
}
```

Update an API key:

```go
// Assuming client Credentials created with
var myKey string
creds := client.NewAPIKeyDynamicCredentials(
	func(context.Context) (string, error) { return myKey, nil })
// Just update by replacing
myKey = myKeyUpdated
```
**pre v1.26.0**

Create an initial connection:

```go
// Create headers provider
type APIKeyProvider struct {
	APIKey    string
	Namespace string
}

func (a *APIKeyProvider) GetHeaders(context.Context) (map[string]string, error) {
	return map[string]string{
		"Authorization":      "Bearer " + a.APIKey,
		"temporal-namespace": a.Namespace,
	}, nil
}

// Use headers provider
apiKeyProvider := &APIKeyProvider{APIKey: "<api-key>", Namespace: "<namespace>.<account_id>"}
c, err := client.Dial(client.Options{
	HostPort:          "<endpoint>",
	Namespace:         "<namespace>.<account_id>",
	HeadersProvider:   apiKeyProvider,
	ConnectionOptions: client.ConnectionOptions{TLS: &tls.Config{}},
})
```

Update an API key:

```go
apiKeyProvider.APIKey = myKeyUpdated
```
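For completeness, connecting with mTLS client certificates instead of an API key follows the same pattern. The following is a minimal sketch; the certificate and key file names (`client.pem`, `client.key`) are placeholders for the files issued by the CA you uploaded to Temporal Cloud:

```go
// Load the client certificate and key signed by your Namespace's CA.
cert, err := tls.LoadX509KeyPair("client.pem", "client.key")
if err != nil {
	log.Fatalf("Unable to load client certificate: %v", err)
}

// Present the certificate when dialing; no Credentials field is needed for mTLS.
c, err := client.Dial(client.Options{
	HostPort:  "<namespace>.<account_id>.tmprl.cloud:7233",
	Namespace: "<namespace>.<account_id>",
	ConnectionOptions: client.ConnectionOptions{
		TLS: &tls.Config{Certificates: []tls.Certificate{cert}},
	},
})
```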
## Start Workflow Execution {#start-workflow-execution}

**How to start a Workflow Execution using the Go SDK**

[Workflow Execution](/workflow-execution) semantics rely on several parameters: to start a Workflow Execution, you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

In the examples below, all Workflow Executions are started using a Temporal Client. To spawn Workflow Executions from within another Workflow Execution, use either the [Child Workflow](/develop/go/child-workflows) or External Workflow APIs.

See the [Customize Workflow Type](/develop/go/core-application#customize-workflow-type) section to see how to customize the name of the Workflow Type.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event ([WorkflowExecutionStarted](/references/events#workflowexecutionstarted)) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first [WorkflowTaskScheduled](/references/events#workflowtaskscheduled) Event.

To spawn a [Workflow Execution](/workflow-execution), use the `ExecuteWorkflow()` method on the Go SDK [`Client`](https://pkg.go.dev/go.temporal.io/sdk/client#Client). The `ExecuteWorkflow()` API call requires an instance of [`context.Context`](https://pkg.go.dev/context#Context), an instance of [`StartWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk/client#StartWorkflowOptions), a Workflow Type name, and all variables to be passed to the Workflow Execution. The `ExecuteWorkflow()` call returns a Future, which can be used to get the result of the Workflow Execution.

```go
package main

import (
	// ...

	"go.temporal.io/sdk/client"
)

func main() {
	temporalClient, err := client.Dial(client.Options{})
	if err != nil {
		// ...
	}
	defer temporalClient.Close()
	// ...
	workflowOptions := client.StartWorkflowOptions{
		// ...
	}
	workflowRun, err := temporalClient.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition, param)
	if err != nil {
		// ...
	}
	// ...
}

func YourWorkflowDefinition(ctx workflow.Context, param YourWorkflowParam) (YourWorkflowResponse, error) {
	// ...
}
```

If the invocation process has direct access to the function, the Workflow Type name parameter can be passed as if the function name were a variable, without quotations.

```go
workflowRun, err := temporalClient.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition, param)
```

If the invocation process does not have direct access to the statically defined Workflow Definition (for example, if the Workflow Definition is in an un-importable package, or is written in a completely different language), the Workflow Type can be provided as a `string`.

```go
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, "YourWorkflowDefinition", param)
```

### Set Workflow Task Queue {#set-task-queue}

**How to set a Workflow's Task Queue using the Go SDK**

In most SDKs, the only Workflow Option that must be set is the name of the [Task Queue](/task-queue). For any code to execute, a Worker Process must be running that contains a Worker Entity polling the same Task Queue name.

Create an instance of [`StartWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk@v1.10.0/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `TaskQueue` field, and pass the instance to the `ExecuteWorkflow` call.
- Type: `string`
- Default: None; this is a required field that must be set by the developer

```go
workflowOptions := client.StartWorkflowOptions{
	// ...
	TaskQueue: "your-task-queue",
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

You can configure Task Queues that are host-specific, Worker-specific, or Workflow-specific to distribute your application load. For more information, refer to [Task Queues Processing Tuning](/develop/worker-performance#task-queues-processing-tuning) and [Worker Versioning](https://docs.temporal.io/worker-versioning).

### Set custom Workflow Id {#workflow-id}

**How to set a custom Workflow Id using the Go SDK**

Although it is not required, we recommend providing your own [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) that maps to a business process or business entity identifier, such as an order identifier or customer identifier.

Create an instance of [`StartWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk@v1.10.0/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `ID` field, and pass the instance to the `ExecuteWorkflow` call.

- Type: `string`
- Default: System-generated UUID

```go
workflowOptions := client.StartWorkflowOptions{
	// ...
	ID: "Your-Custom-Workflow-Id",
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

### Go StartWorkflowOptions reference {#workflow-options-reference}

Create an instance of [`StartWorkflowOptions`](https://pkg.go.dev/go.temporal.io/sdk@v1.10.0/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, and pass the instance to the `ExecuteWorkflow` call.

The following fields are available:

| Field | Required | Type |
| --- | --- | --- |
| [`ID`](#id) | No | `string` |
| [`TaskQueue`](#taskqueue) | **Yes** | `string` |
| [`WorkflowExecutionTimeout`](#workflowexecutiontimeout) | No | `time.Duration` |
| [`WorkflowRunTimeout`](#workflowruntimeout) | No | `time.Duration` |
| [`WorkflowTaskTimeout`](#workflowtasktimeout) | No | `time.Duration` |
| [`WorkflowIDReusePolicy`](#workflowidreusepolicy) | No | [`WorkflowIdReusePolicy`](https://pkg.go.dev/go.temporal.io/api/enums/v1#WorkflowIdReusePolicy) |
| [`WorkflowExecutionErrorWhenAlreadyStarted`](#workflowexecutionerrorwhenalreadystarted) | No | `bool` |
| [`RetryPolicy`](#retrypolicy) | No | [`RetryPolicy`](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy) |
| [`CronSchedule`](#cronschedule) | No | `string` |
| [`Memo`](#memo) | No | `map[string]interface{}` |
| [`SearchAttributes`](#searchattributes) | No | `map[string]interface{}` |

#### ID

Although it is not required, we recommend providing your own [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) that maps to a business process or business entity identifier, such as an order identifier or customer identifier.

Create an instance of [StartWorkflowOptions](https://pkg.go.dev/go.temporal.io/sdk@v1.10.0/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `ID` field, and pass the instance to the `ExecuteWorkflow` call.

- Type: `string`
- Default: System-generated UUID

```go
workflowOptions := client.StartWorkflowOptions{
	// ...
	ID: "Your-Custom-Workflow-Id",
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### TaskQueue

Create an instance of [StartWorkflowOptions](https://pkg.go.dev/go.temporal.io/sdk@v1.10.0/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `TaskQueue` field, and pass the instance to the `ExecuteWorkflow` call.

- Type: `string`
- Default: None; this is a required field that must be set by the developer

```go
workflowOptions := client.StartWorkflowOptions{
	// ...
	TaskQueue: "your-task-queue",
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### WorkflowExecutionTimeout

Create an instance of [StartWorkflowOptions](https://pkg.go.dev/go.temporal.io/sdk/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `WorkflowExecutionTimeout` field, and pass the instance to the `ExecuteWorkflow` call.

- Type: `time.Duration`
- Default: Unlimited

```go
workflowOptions := client.StartWorkflowOptions{
	// ...
	WorkflowExecutionTimeout: time.Hour * 24 * 365 * 10,
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### WorkflowRunTimeout

Create an instance of [StartWorkflowOptions](https://pkg.go.dev/go.temporal.io/sdk/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `WorkflowRunTimeout` field, and pass the instance to the `ExecuteWorkflow` call.

- Type: `time.Duration`
- Default: Same as [`WorkflowExecutionTimeout`](#workflowexecutiontimeout)

```go
workflowOptions := client.StartWorkflowOptions{
	WorkflowRunTimeout: time.Hour * 24 * 365 * 10,
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### WorkflowTaskTimeout

Create an instance of [StartWorkflowOptions](https://pkg.go.dev/go.temporal.io/sdk/client#StartWorkflowOptions) from the `go.temporal.io/sdk/client` package, set the `WorkflowTaskTimeout` field, and pass the instance to the `ExecuteWorkflow` call.

- Type: `time.Duration`
- Default: `time.Second * 10`

```go
workflowOptions := client.StartWorkflowOptions{
	WorkflowTaskTimeout: time.Second * 10,
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### WorkflowIDReusePolicy

- Type: [WorkflowIdReusePolicy](https://pkg.go.dev/go.temporal.io/api/enums/v1#WorkflowIdReusePolicy)
- Default: `enums.WORKFLOW_ID_REUSE_POLICY_ALLOW_DUPLICATE`

Set a value from the `go.temporal.io/api/enums/v1` package.

```go
workflowOptions := client.StartWorkflowOptions{
	WorkflowIDReusePolicy: enums.WORKFLOW_ID_REUSE_POLICY_ALLOW_DUPLICATE,
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### WorkflowExecutionErrorWhenAlreadyStarted

- Type: `bool`
- Default: `false`

```go
workflowOptions := client.StartWorkflowOptions{
	WorkflowExecutionErrorWhenAlreadyStarted: false,
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### RetryPolicy

Create an instance of a [RetryPolicy](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy) from the `go.temporal.io/sdk/temporal` package and provide it as the value to the `RetryPolicy` field of the instance of `StartWorkflowOptions`.

- Type: [RetryPolicy](https://pkg.go.dev/go.temporal.io/sdk/temporal#RetryPolicy)
- Default: None

```go
retrypolicy := &temporal.RetryPolicy{
	InitialInterval:    time.Second,
	BackoffCoefficient: 2.0,
	MaximumInterval:    time.Second * 100,
}

workflowOptions := client.StartWorkflowOptions{
	RetryPolicy: retrypolicy,
	// ...
}
workflowRun, err := temporalClient.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### CronSchedule

- Type: `string`
- Default: None

```go
workflowOptions := client.StartWorkflowOptions{
	CronSchedule: "15 8 * * *",
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

[Sample](https://github.com/temporalio/samples-go/tree/master/cron)

#### Memo

- Type: `map[string]interface{}`
- Default: Empty

```go
workflowOptions := client.StartWorkflowOptions{
	Memo: map[string]interface{}{
		"description": "Test search attributes workflow",
	},
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

#### SearchAttributes

**How to set Workflow Execution Search Attributes in Go**

- Type: `map[string]interface{}`
- Default: Empty

These are the corresponding [Search Attribute value types](/search-attribute#supported-types) in Go:

- Keyword = string
- Int = int64
- Double = float64
- Bool = bool
- Datetime = time.Time
- Text = string

```go
searchAttributes := map[string]interface{}{
	"CustomIntField": 1,
	"MiscData":       "yellow",
}
workflowOptions := client.StartWorkflowOptions{
	SearchAttributes: searchAttributes,
	// ...
}
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition)
if err != nil {
	// ...
}
```

### Get Workflow results {#get-workflow-results}

**How to get the results of a Workflow Execution using the Go SDK**

If the call to start a Workflow Execution is successful, you gain access to the Workflow Execution's Run Id. The Workflow Id, Run Id, and Namespace can be used to uniquely identify a Workflow Execution in the system and get its result.

You can either block on the result (synchronous execution) or get the result at some later point in time (asynchronous execution). You can also use Queries to access the state and results of Workflow Executions.

The `ExecuteWorkflow` call returns an instance of [`WorkflowRun`](https://pkg.go.dev/go.temporal.io/sdk/client#WorkflowRun), which is the `workflowRun` variable in the following line.

```go
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, app.YourWorkflowDefinition, param)
if err != nil {
	// ...
}
```

The instance of `WorkflowRun` has the following three methods:

- `GetID()`: Returns the Workflow Id of the invoked Workflow Execution.
- `GetRunID()`: Always returns the Run Id of the initial Run (see [Continue As New](#)) in the series of Runs that make up the full Workflow Execution.
- `Get()`: Takes a pointer as a parameter and populates the associated variable with the Workflow Execution result.
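For example, assuming the `workflowRun` returned by the `ExecuteWorkflow` call above, you might record the identifiers before retrieving the result (a minimal sketch):

```go
// The Workflow Id and Run Id are available as soon as ExecuteWorkflow returns.
log.Println("Started Workflow", "WorkflowID", workflowRun.GetID(), "RunID", workflowRun.GetRunID())
```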
To wait on the result of a Workflow Execution in the same process that invoked it, call `Get()` on the instance of `WorkflowRun` returned by the `ExecuteWorkflow()` call.

```go
workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, YourWorkflowDefinition, param)
if err != nil {
	// ...
}

var result YourWorkflowResponse
err = workflowRun.Get(context.Background(), &result)
if err != nil {
	// ...
}
```

However, the result of a Workflow Execution can also be obtained from a completely different process. All that is needed is the [Workflow Id](#). (A [Run Id](#) is optional; without it you get the latest Run, so provide one only if you need a specific Run when more than one closed Workflow Execution has the same Workflow Id.) The result of the Workflow Execution is available for as long as the Workflow Execution Event History remains in the system.

Call the `GetWorkflow()` method on an instance of the Go SDK Client and pass it the Workflow Id used to spawn the Workflow Execution. Then call the `Get()` method on the instance of `WorkflowRun` that is returned, passing it a pointer to populate the result.

```go
// ...
workflowID := "Your-Custom-Workflow-Id"
// An empty Run Id targets the latest Run for this Workflow Id.
workflowRun := c.GetWorkflow(context.Background(), workflowID, "")
var result YourWorkflowResponse
err = workflowRun.Get(context.Background(), &result)
if err != nil {
	// ...
}
// ...
```

**Get last completion result**

In the case of a [Temporal Cron Job](/cron-job), you might need to get the result of the previous Workflow Run and use it in the current Workflow Run. To do this, use the [`HasLastCompletionResult`](https://pkg.go.dev/go.temporal.io/sdk/workflow#HasLastCompletionResult) and [`GetLastCompletionResult`](https://pkg.go.dev/go.temporal.io/sdk/workflow#GetLastCompletionResult) APIs, available from the [`go.temporal.io/sdk/workflow`](https://pkg.go.dev/go.temporal.io/sdk/workflow) package, directly in your Workflow code.

```go
type CronResult struct {
	Count int
}

func YourCronWorkflowDefinition(ctx workflow.Context) (CronResult, error) {
	count := 1

	if workflow.HasLastCompletionResult(ctx) {
		var lastResult CronResult
		if err := workflow.GetLastCompletionResult(ctx, &lastResult); err == nil {
			count = count + lastResult.Count
		}
	}

	newResult := CronResult{
		Count: count,
	}
	return newResult, nil
}
```

This works even if one of the Cron Workflow Runs fails. The next Workflow Run gets the result of the last successfully completed Workflow Run.

---

## Temporal Nexus - Go SDK Feature Guide

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Go SDK support for Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability).
:::

Use [Temporal Nexus](/evaluate/nexus) to connect Temporal Applications within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations.
This page shows how to do the following:

- [Run a development Temporal Service with Nexus enabled](#run-the-temporal-nexus-development-server)
- [Create caller and handler Namespaces](#create-caller-handler-namespaces)
- [Create a Nexus Endpoint to route requests from caller to handler](#create-nexus-endpoint)
- [Define the Nexus Service contract](#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](#develop-caller-workflow-nexus-service)
- [Make Nexus calls across Namespaces with a development Server](#nexus-calls-across-namespaces-dev-server)
- [Make Nexus calls across Namespaces in Temporal Cloud](#nexus-calls-across-namespaces-temporal-cloud)

:::note
This documentation uses source code derived from the [Go Nexus sample](https://github.com/temporalio/samples-go/tree/main/nexus).
:::

## Run the Temporal Development Server with Nexus enabled {#run-the-temporal-nexus-development-server}

Prerequisites:

- [Install the latest Temporal CLI](https://docs.temporal.io/develop/go/core-application#run-a-development-server) (v1.3.0 or higher recommended)
- [Install the latest Temporal Go SDK](https://docs.temporal.io/develop/go/core-application#install-a-temporal-sdk) (v1.33.0 or higher recommended)

The first step in working with Temporal Nexus is starting a Temporal server with Nexus enabled.

```
temporal server start-dev
```

This command automatically starts the Temporal development server with the Web UI and creates the `default` Namespace. It uses an in-memory database, so do not use it for real use cases.

The Temporal Web UI should now be accessible at [http://localhost:8233](http://localhost:8233), and the Temporal Server should now be available for client connections on `localhost:7233`.

## Create caller and handler Namespaces {#create-caller-handler-namespaces}

Before setting up Nexus Endpoints, create separate Namespaces for the caller and handler.

```
temporal operator namespace create --namespace my-target-namespace
temporal operator namespace create --namespace my-caller-namespace
```

`my-target-namespace` will contain the Nexus Operation handler, and a Workflow in `my-caller-namespace` will call that Operation handler. We use different Namespaces to demonstrate cross-Namespace Nexus calls.

## Create a Nexus Endpoint to route requests from caller to handler {#create-nexus-endpoint}

After establishing caller and handler Namespaces, the next step is to create a Nexus Endpoint to route requests.

```
temporal operator nexus endpoint create \
  --name my-nexus-endpoint-name \
  --target-namespace my-target-namespace \
  --target-task-queue my-handler-task-queue
```

You can also use the Web UI to create the Namespaces and Nexus Endpoint.

## Define the Nexus Service contract {#define-nexus-service-contract}

Defining a clear contract for the Nexus Service is crucial for smooth communication. In this example, a service package describes the Service and Operation names along with input/output types for caller Workflows to use the Nexus Endpoint.

Each [Temporal SDK includes and uses a default Data Converter](https://docs.temporal.io/dataconversion). The default Data Converter encodes payloads in the following order: Null, Byte array, Protobuf JSON, and JSON. In a polyglot environment (one where more than one language and SDK is used to develop a Temporal solution), Protobuf and JSON are common choices. This example uses native Go types.
[nexus/service/api.go](https://github.com/temporalio/samples-go/blob/main/nexus/service/api.go)

```go
// ...
const HelloServiceName = "my-hello-service"

// Echo operation
const EchoOperationName = "echo"

type EchoInput struct {
	Message string
}

type EchoOutput EchoInput
```

## Develop a Nexus Service and Operation handlers {#develop-nexus-service-operation-handlers}

Nexus Operation handlers are typically defined in the same Worker as the underlying Temporal primitives they abstract. Operation handlers decide whether a given Nexus Operation is synchronous or asynchronous. They can execute arbitrary code and invoke underlying Temporal primitives such as a Workflow, Query, Signal, or Update.

The `temporalnexus` package has builders to create Nexus Operations and other helpers for authoring Operation handlers:

- `NewWorkflowRunOperation` - Run a Workflow as an asynchronous Nexus Operation
- `GetClient` - Get the Temporal Client that the Worker was initialized with, for synchronous handlers backed by Temporal primitives such as Signals and Queries

This tutorial starts with a sync Operation handler example using the `nexus.NewSyncOperation` method, and then shows how to create an async Operation handler that uses `NewWorkflowRunOperation` to start a handler Workflow from a Nexus Operation.

### Develop a Synchronous Nexus Operation handler

The `nexus.NewSyncOperation` builder function is meant for exposing simple RPC handlers. Such handlers typically use the SDK client, obtained via `temporalnexus.GetClient(ctx)`, for signaling, querying, and listing Workflows. However, implementations are free to make arbitrary calls to other services or databases, or perform computations such as this one:

[nexus/handler/app.go](https://github.com/temporalio/samples-go/blob/main/nexus/handler/app.go)

```go
// ...
import (
	"context"
	"fmt"

	"github.com/nexus-rpc/sdk-go/nexus"
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/temporalnexus"
	"go.temporal.io/sdk/workflow"

	"github.com/temporalio/samples-go/nexus/service"
)

// NewSyncOperation is meant for exposing simple RPC handlers.
var EchoOperation = nexus.NewSyncOperation(service.EchoOperationName, func(ctx context.Context, input service.EchoInput, options nexus.StartOperationOptions) (service.EchoOutput, error) {
	// Use temporalnexus.GetClient to get the client that the worker was initialized with to perform client calls
	// such as signaling, querying, and listing workflows. Implementations are free to make arbitrary calls to other
	// services or databases, or perform simple computations such as this one.
	return service.EchoOutput(input), nil
})
```

### Develop an Asynchronous Nexus Operation handler to start a Workflow

Use the `NewWorkflowRunOperation` constructor, which is the easiest way to expose a Workflow as an operation. See alternatives [here](https://pkg.go.dev/go.temporal.io/sdk/temporalnexus).

[nexus/handler/app.go](https://github.com/temporalio/samples-go/blob/main/nexus/handler/app.go)

```go
// ...
var HelloOperation = temporalnexus.NewWorkflowRunOperation(service.HelloOperationName, HelloHandlerWorkflow, func(ctx context.Context, input service.HelloInput, options nexus.StartOperationOptions) (client.StartWorkflowOptions, error) {
	return client.StartWorkflowOptions{
		// Workflow IDs should typically be business meaningful IDs and are used to dedupe workflow starts.
		// For this example, we're using the request ID allocated by Temporal when the caller workflow schedules
		// the operation, this ID is guaranteed to be stable across retries of this operation.
		ID: options.RequestID,
		// Task queue defaults to the task queue this operation is handled on.
	}, nil
})
```

Workflow IDs should typically be business-meaningful IDs and are used to dedupe Workflow starts. In this example, `HelloOperation` uses the request ID allocated by Temporal when the caller Workflow schedules the Operation; that ID is guaranteed to be stable across retries of the Operation.

:::tip RESOURCES
[Attach multiple Nexus callers to a handler Workflow](/nexus/operations#attaching-multiple-nexus-callers) with a Conflict-Policy of Use-Existing.
:::

#### Map a Nexus Operation input to multiple Workflow arguments

A Nexus Operation can take only one input parameter. If you want a Nexus Operation to start a Workflow that takes multiple arguments, use `NewWorkflowRunOperationWithOptions` or `MustNewWorkflowRunOperationWithOptions`.

[nexus-multiple-arguments/handler/app.go](https://github.com/temporalio/samples-go/blob/main/nexus-multiple-arguments/handler/app.go)

```go
var HelloOperation = temporalnexus.MustNewWorkflowRunOperationWithOptions(temporalnexus.WorkflowRunOperationOptions[service.HelloInput, service.HelloOutput]{
	Name: service.HelloOperationName,
	Handler: func(ctx context.Context, input service.HelloInput, options nexus.StartOperationOptions) (temporalnexus.WorkflowHandle[service.HelloOutput], error) {
		return temporalnexus.ExecuteUntypedWorkflow[service.HelloOutput](
			ctx, options,
			client.StartWorkflowOptions{
				// Workflow IDs should typically be business meaningful IDs and are used to dedupe workflow starts.
				// For this example, we're using the request ID allocated by Temporal when the caller workflow schedules
				// the operation, this ID is guaranteed to be stable across retries of this operation.
				ID: options.RequestID,
			},
			HelloHandlerWorkflow,
			input.Name,
			input.Language,
		)
	},
})
```

### Register a Nexus Service in a Worker

After developing an asynchronous Nexus Operation handler to start a Workflow, the next step is to register a Nexus Service in a Worker.

[nexus/handler/worker/main.go](https://github.com/temporalio/samples-go/blob/main/nexus/handler/worker/main.go)

```go
package main

import (
	"log"
	"os"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"

	"github.com/nexus-rpc/sdk-go/nexus"
	"github.com/temporalio/samples-go/nexus/handler"
	"github.com/temporalio/samples-go/nexus/options"
	"github.com/temporalio/samples-go/nexus/service"
)

const (
	taskQueue = "my-handler-task-queue"
)

func main() {
	// The client and worker are heavyweight objects that should be created once per process.
	clientOptions, err := options.ParseClientOptionFlags(os.Args[1:])
	if err != nil {
		log.Fatalf("Invalid arguments: %v", err)
	}
	c, err := client.Dial(clientOptions)
	if err != nil {
		log.Fatalln("Unable to create client", err)
	}
	defer c.Close()

	w := worker.New(c, taskQueue, worker.Options{})

	service := nexus.NewService(service.HelloServiceName)
	err = service.Register(handler.EchoOperation, handler.HelloOperation)
	if err != nil {
		log.Fatalln("Unable to register operations", err)
	}
	w.RegisterNexusService(service)
	w.RegisterWorkflow(handler.HelloHandlerWorkflow)

	err = w.Run(worker.InterruptCh())
	if err != nil {
		log.Fatalln("Unable to start worker", err)
	}
}
```

## Develop a caller Workflow that uses the Nexus Service {#develop-caller-workflow-nexus-service}

Import the Service API package that has the necessary service and operation names and input/output types to execute a Nexus Operation from the caller Workflow:

[nexus/caller/workflows.go](https://github.com/temporalio/samples-go/blob/main/nexus/caller/workflows.go)

```go
package caller

import (
	"github.com/temporalio/samples-go/nexus/service"
	"go.temporal.io/sdk/workflow"
)

const (
	TaskQueue    = "my-caller-workflow-task-queue"
	endpointName = "my-nexus-endpoint-name"
)

func EchoCallerWorkflow(ctx workflow.Context, message string) (string, error) {
	c := workflow.NewNexusClient(endpointName, service.HelloServiceName)
	fut := c.ExecuteOperation(ctx, service.EchoOperationName, service.EchoInput{Message: message}, workflow.NexusOperationOptions{})
	var res service.EchoOutput
	if err := fut.Get(ctx, &res); err != nil {
		return "", err
	}
	return res.Message, nil
}

func HelloCallerWorkflow(ctx workflow.Context, name string, language service.Language) (string, error) {
	c := workflow.NewNexusClient(endpointName, service.HelloServiceName)
	fut := c.ExecuteOperation(ctx, service.HelloOperationName, service.HelloInput{Name: name, Language: language}, workflow.NexusOperationOptions{})
	var res service.HelloOutput

	// Optionally wait for the operation to be started. NexusOperationExecution will contain the operation token in
	// case this operation is asynchronous, which is a handle that can be used to perform additional actions like
	// cancelling an operation.
	var exec workflow.NexusOperationExecution
	if err := fut.GetNexusOperationExecution().Get(ctx, &exec); err != nil {
		return "", err
	}

	if err := fut.Get(ctx, &res); err != nil {
		return "", err
	}
	return res.Message, nil
}
```

### Register the caller Workflow in a Worker

After developing the caller Workflow, the next step is to register it with a Worker.

[nexus/caller/worker/main.go](https://github.com/temporalio/samples-go/blob/main/nexus/caller/worker/main.go)

```go
package main

import (
	"log"
	"os"

	"github.com/temporalio/samples-go/nexus/caller"
	"github.com/temporalio/samples-go/nexus/options"
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// The client and worker are heavyweight objects that should be created once per process.
	clientOptions, err := options.ParseClientOptionFlags(os.Args[1:])
	if err != nil {
		log.Fatalf("Invalid arguments: %v", err)
	}
	c, err := client.Dial(clientOptions)
	if err != nil {
		log.Fatalln("Unable to create client", err)
	}
	defer c.Close()

	w := worker.New(c, caller.TaskQueue, worker.Options{})
	w.RegisterWorkflow(caller.EchoCallerWorkflow)
	w.RegisterWorkflow(caller.HelloCallerWorkflow)

	err = w.Run(worker.InterruptCh())
	if err != nil {
		log.Fatalln("Unable to start worker", err)
	}
}
```

### Develop a starter to start the caller Workflow

To initiate the caller Workflow, a starter program is required.

[nexus/caller/starter/main.go](https://github.com/temporalio/samples-go/blob/main/nexus/caller/starter/main.go)

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"go.temporal.io/sdk/client"

	"github.com/temporalio/samples-go/nexus/caller"
	"github.com/temporalio/samples-go/nexus/options"
	"github.com/temporalio/samples-go/nexus/service"
)

func main() {
	clientOptions, err := options.ParseClientOptionFlags(os.Args[1:])
	if err != nil {
		log.Fatalf("Invalid arguments: %v", err)
	}
	c, err := client.Dial(clientOptions)
	if err != nil {
		log.Fatalln("Unable to create client", err)
	}
	defer c.Close()

	runWorkflow(c, caller.EchoCallerWorkflow, "Nexus Echo 👋")
	runWorkflow(c, caller.HelloCallerWorkflow, "Nexus", service.ES)
}

func runWorkflow(c client.Client, workflow interface{}, args ...interface{}) {
	ctx := context.Background()
	workflowOptions := client.StartWorkflowOptions{
		ID:        "nexus_hello_caller_workflow_" + time.Now().Format("20060102150405"),
		TaskQueue: caller.TaskQueue,
	}
	wr, err := c.ExecuteWorkflow(ctx, workflowOptions, workflow, args...)
	if err != nil {
		log.Fatalln("Unable to execute workflow", err)
	}
	log.Println("Started workflow", "WorkflowID", wr.GetID(), "RunID", wr.GetRunID())

	// Synchronously wait for the workflow completion.
	var result string
	err = wr.Get(context.Background(), &result)
	if err != nil {
		log.Fatalln("Unable to get workflow result", err)
	}
	log.Println("Workflow result:", result)
}
```

## Make Nexus calls across Namespaces with a development Server {#nexus-calls-across-namespaces-dev-server}

Follow the steps below to run the Nexus handler Worker, the Nexus caller Worker, and the starter.

### Run Workers connected to a local development server

Run the Nexus handler Worker:

```
cd handler
go run ./worker \
  -target-host localhost:7233 \
  -namespace my-target-namespace
```

In another terminal window, run the Nexus caller Worker:

```
cd caller
go run ./worker \
  -target-host localhost:7233 \
  -namespace my-caller-namespace
```

### Start a caller Workflow

With the Workers running, the final step in the local development process is to start a caller Workflow. Run the starter:

```
cd caller
go run ./starter \
  -target-host localhost:7233 \
  -namespace my-caller-namespace
```

This will result in:

```
2024/10/04 19:57:40 Workflow result: Nexus Echo 👋
2024/10/04 19:57:40 Started workflow WorkflowID nexus_hello_caller_workflow_20240723195740 RunID c9789128-2fcd-4083-829d-95e43279f6d7
2024/10/04 19:57:40 Workflow result: ¡Hola! Nexus 👋
```

### Canceling a Nexus Operation {#canceling-a-nexus-operation}

To cancel a Nexus Operation from within a Workflow, create a Go context using the `workflow.WithCancel` API. This returns a new context and a function that, when called, cancels the context and any SDK method that was passed this context. The future returned by `NexusClient.ExecuteOperation` is resolved when the operation finishes, whether it succeeds, fails, times out, or is canceled.
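For illustration, a minimal sketch of this pattern is shown below. It reuses the `endpointName` constant and `service` package from the caller example above; the Workflow name is hypothetical:

```go
func CancelableCallerWorkflow(ctx workflow.Context, name string, language service.Language) (string, error) {
	// WithCancel returns a child context plus a function that cancels anything started with that context.
	childCtx, cancel := workflow.WithCancel(ctx)

	c := workflow.NewNexusClient(endpointName, service.HelloServiceName)
	fut := c.ExecuteOperation(childCtx, service.HelloOperationName, service.HelloInput{Name: name, Language: language}, workflow.NexusOperationOptions{})

	// Wait for the operation to start; cancelation is delivered via the operation token,
	// so only started asynchronous operations can be canceled.
	var exec workflow.NexusOperationExecution
	if err := fut.GetNexusOperationExecution().Get(ctx, &exec); err != nil {
		return "", err
	}

	cancel() // Request cancelation of the in-flight operation.

	// The future resolves when the operation finishes: succeeded, failed, timed out, or canceled.
	var res service.HelloOutput
	if err := fut.Get(ctx, &res); err != nil {
		return "", err
	}
	return res.Message, nil
}
```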
Only asynchronous operations can be canceled in Nexus, as cancelation is sent using an operation token. The Workflow or other resources backing the operation may choose to ignore the cancelation request. If ignored, the operation may enter a terminal state. Once the caller Workflow completes, the caller's Nexus Machinery will not make any further attempts to cancel operations that are still running. It's okay to leave operations running in some use cases. To ensure cancelations are delivered, wait for all pending operations to finish before exiting the Workflow.

See the [Nexus cancelation sample](https://github.com/temporalio/samples-go/tree/main/nexus-cancelation) for reference.

## Make Nexus calls across Namespaces in Temporal Cloud {#nexus-calls-across-namespaces-temporal-cloud}

This section assumes you are already familiar with [how to connect a Worker to Temporal Cloud](https://docs.temporal.io/develop/go/core-application#run-a-temporal-cloud-worker). The same [source code](https://github.com/temporalio/samples-go/tree/main/nexus) is used in this section, but the `tcld` CLI will be used to create Namespaces and the Nexus Endpoint, and mTLS client certificates will be used to securely connect the caller and handler Workers to their respective Temporal Cloud Namespaces.

### Install the latest `tcld` CLI and generate certificates

To install the latest version of the `tcld` CLI, run the following command (on macOS):

```
brew install temporalio/brew/tcld
```

If you don't already have certificates, you can generate them for mTLS Worker authentication using the command below:

```
tcld gen ca --org $YOUR_ORG_NAME --validity-period 1y --ca-cert ca.pem --ca-key ca.key
```

These certificates will be valid for one year.

### Create caller and handler Namespaces

Before deploying to Temporal Cloud, ensure that the appropriate Namespaces are created for both the caller and handler. If you already have these Namespaces, you can skip this step.

```
tcld login
tcld namespace create \
  --namespace <caller-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1
tcld namespace create \
  --namespace <handler-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1
```

Alternatively, you can create Namespaces through the UI: [https://cloud.temporal.io/Namespaces](https://cloud.temporal.io/Namespaces).

### Create a Nexus Endpoint to route requests from caller to handler

To create a Nexus Endpoint, you must have a Developer account role or higher, and have NamespaceAdmin permission on the `--target-namespace`.

```
tcld nexus endpoint create \
  --name <endpoint-name> \
  --target-task-queue my-handler-task-queue \
  --target-namespace <handler-namespace> \
  --allow-namespace <caller-namespace> \
  --description-file description.md
```

The `--allow-namespace` flag is used to build an Endpoint allowlist of caller Namespaces that can use the Nexus Endpoint, as described in Runtime Access Control.

Alternatively, you can create a Nexus Endpoint through the UI: [https://cloud.temporal.io/nexus](https://cloud.temporal.io/nexus).
### Run Workers Connected to Temporal Cloud with TLS certificates

Run the handler Worker:

```
cd handler
go run ./worker \
  -target-host <handler-namespace>.<account_id>.tmprl.cloud:7233 \
  -namespace <handler-namespace>.<account_id> \
  -client-cert 'path/to/your/ca.pem' \
  -client-key 'path/to/your/ca.key'
```

Run the caller Worker:

```
cd caller
go run ./worker \
  -target-host <caller-namespace>.<account_id>.tmprl.cloud:7233 \
  -namespace <caller-namespace>.<account_id> \
  -client-cert 'path/to/your/ca.pem' \
  -client-key 'path/to/your/ca.key'
```

### Start a caller Workflow

```
cd caller
go run ./starter \
  -target-host <caller-namespace>.<account_id>.tmprl.cloud:7233 \
  -namespace <caller-namespace>.<account_id> \
  -client-cert 'path/to/your/ca.pem' \
  -client-key 'path/to/your/ca.key'
```

This will result in:

```
2024/10/04 19:57:40 Workflow result: Nexus Echo 👋
2024/10/04 19:57:40 Workflow result: ¡Hola! Nexus 👋
```

### Run Workers Connected to Temporal Cloud with API keys

[View the source code](https://github.com/temporalio/samples-go/tree/main/nexus) in the context of the rest of the application code.

Run the handler Worker:

```
cd handler
go run ./worker \
  -target-host <region>.<cloud_provider>.api.temporal.io:7233 \
  -namespace <handler-namespace>.<account_id> \
  -api-key <api-key>
```

Run the caller Worker:

```
cd caller
go run ./worker \
  -target-host <region>.<cloud_provider>.api.temporal.io:7233 \
  -namespace <caller-namespace>.<account_id> \
  -api-key <api-key>
```

### Start a caller Workflow

```
cd caller
go run ./starter \
  -target-host <region>.<cloud_provider>.api.temporal.io:7233 \
  -namespace <caller-namespace>.<account_id> \
  -api-key <api-key>
```

This will result in:

```
2024/10/04 19:57:40 Workflow result: Nexus Echo 👋
2024/10/04 19:57:40 Workflow result: ¡Hola! Nexus 👋
```

## Observability

### Web UI

A synchronous Nexus Operation surfaces in the caller Workflow with just `NexusOperationScheduled` and `NexusOperationCompleted` Events in the caller's Workflow history.

An asynchronous Nexus Operation surfaces in the caller Workflow with `NexusOperationScheduled`, `NexusOperationStarted`, and `NexusOperationCompleted` Events in the caller's Workflow history.

### Temporal CLI

Use the `workflow describe` command to show pending Nexus Operations in the caller Workflow and any attached callbacks on the handler Workflow:

```
temporal workflow describe -w <workflow-id>
```

Nexus events are included in the caller's Workflow history:

```
temporal workflow show -w <workflow-id>
```

For **asynchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationStarted`
- `NexusOperationCompleted`

For **synchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationCompleted`

:::note
`NexusOperationStarted` isn't reported in the caller's history for synchronous operations.
:::

## Learn more

- Read the high-level description of the [Temporal Nexus feature](/evaluate/nexus) and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- Learn how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4) and [Encyclopedia](/nexus).
- Deploy Nexus Endpoints in production with [Temporal Cloud](/cloud/nexus).

---

## Testing - Go SDK

The Testing section of the Temporal Application development guide describes the frameworks that facilitate Workflow and integration testing.

In the context of Temporal, you can create these types of automated tests:

- **End-to-end:** Running a Temporal Server and Worker with all its Workflows and Activities; starting and interacting with Workflows from a Client.
- **Integration:** Anything between end-to-end and unit testing.
  - Running Activities with mocked Context and other SDK imports (and usually network requests).
  - Running Workers with mock Activities, and using a Client to start Workflows.
  - Running Workflows with mocked SDK imports.
- **Unit:** Running a piece of Workflow or Activity code (a function or method) and mocking any code it calls.

We generally recommend writing the majority of your tests as integration tests. Because the test server supports skipping time, use the test server for both end-to-end and integration tests with Workers.

## Test frameworks {#test-frameworks}

The Temporal Go SDK provides a test framework to facilitate testing Workflow implementations. This framework is suited for implementing unit tests as well as functional tests of the Workflow logic.

## Test setup

To run unit tests, we first define a test suite struct that absorbs both the basic suite functionality from [testify](https://pkg.go.dev/github.com/stretchr/testify/suite) via `suite.Suite` and the suite functionality from the Temporal test framework via `testsuite.WorkflowTestSuite`. Because every test in this test suite will test our Workflow, we add a property to our struct to hold an instance of the test environment. This allows us to initialize the test environment in a setup method.

For testing Workflows, we use a `testsuite.TestWorkflowEnvironment`.

```go
type UnitTestSuite struct {
	suite.Suite
	testsuite.WorkflowTestSuite

	env *testsuite.TestWorkflowEnvironment
}
```

Next, we implement a `SetupTest` method to set up a new test environment before each test. Doing so ensures that each test runs in its own isolated sandbox.

```go
func (s *UnitTestSuite) SetupTest() {
	s.env = s.NewTestWorkflowEnvironment()
}
```

We also implement an `AfterTest` function where we assert that all the mocks we set up were indeed called, by invoking `s.env.AssertExpectations(s.T())`. The timeout for the entire test can be set using `SetTestTimeout` in the Workflow or Activity environment.

```go
func (s *UnitTestSuite) AfterTest(suiteName, testName string) {
	s.env.AssertExpectations(s.T())
}
```

Finally, we create a regular test function recognized by the `go test` command, and pass the struct to `suite.Run`.

```go
func TestUnitTestSuite(t *testing.T) {
	suite.Run(t, new(UnitTestSuite))
}
```

## Testing Activities {#test-activities}

An Activity can be tested with a mock Activity environment, which provides a way to mock the Activity context, listen to Heartbeats, and cancel the Activity. This behavior allows you to test the Activity in isolation by calling it directly, without needing to create a Worker to run the Activity.

## Mock and override Activities

When running unit tests on Workflows, we want to test the Workflow logic in isolation. Additionally, we want to inject Activity errors during our test runs. The test framework provides two mechanisms that support these scenarios: Activity mocking and Activity overriding. Both of these mechanisms allow you to change the behavior of Activities invoked by your Workflow without the need to modify the actual Workflow code.

Let's take a look at a test that simulates an Activity failure via the "Activity mocking" mechanism.
```go
func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
	s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
		"", errors.New("SimpleActivityFailure"))
	s.env.ExecuteWorkflow(SimpleWorkflow, "test_failure")

	s.True(s.env.IsWorkflowCompleted())

	err := s.env.GetWorkflowError()
	s.Error(err)
	var applicationErr *temporal.ApplicationError
	s.True(errors.As(err, &applicationErr))
	s.Equal("SimpleActivityFailure", applicationErr.Error())
}
```

This test simulates the Activity `SimpleActivity`, invoked by our Workflow `SimpleWorkflow`, returning an error. We accomplish this by setting up a mock on the test environment for `SimpleActivity` that returns an error.

```go
s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
	"", errors.New("SimpleActivityFailure"))
```

With the mock set up, we can now execute the Workflow via the `s.env.ExecuteWorkflow(...)` method and assert that the Workflow completed and returned the expected error.

Simply mocking the execution to return a desired value or error is a pretty powerful mechanism to isolate Workflow logic. However, sometimes we want to replace the Activity with an alternate implementation to support a more complex test scenario. Let's assume we want to validate that the Activity gets called with the correct parameters.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParamCorrect() {
	s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
		func(ctx context.Context, value string) (string, error) {
			s.Equal("test_success", value)
			return value, nil
		})
	s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

	s.True(s.env.IsWorkflowCompleted())
	s.NoError(s.env.GetWorkflowError())
}
```

In this example, we provide a function implementation as the parameter to `Return`. This allows us to provide an alternate implementation for the Activity `SimpleActivity`. The framework will execute this function whenever the Activity is invoked and pass on the return value from the function as the result of the Activity invocation. Additionally, the framework will validate that the signature of the "mock" function matches the signature of the original Activity function.

Since this can be an entire function, there is no limitation as to what we can do here. In this example, we assert that the `value` param has the same content as the value param we passed to the Workflow.

### Run an Activity {#run-an-activity}

If an Activity references its context, you need to mock that context when testing in isolation.

### Listen to Heartbeats {#listen-to-heartbeats}

When an Activity sends a Heartbeat, be sure that you can see the Heartbeats in your test code so that you can verify them.

### Cancel an Activity {#cancel-an-activity}

If an Activity is supposed to react to a Cancellation, you can test whether it reacts correctly by canceling it.

## Mock and override Nexus operations

Mocking Nexus operations lets you test a Workflow that executes Nexus operations without needing a Nexus handler to run the actual Nexus operation. You can mock the Nexus operation or override its implementation with the test Workflow environment.

Consider a test that simulates a Nexus operation call. In this example, the Nexus operation is called `sample-operation`, the input type is `SampleInput`, the output type is `SampleOutput`, and it belongs to the Nexus service `sample-service`.
The example below mocks a call to a synchronous Nexus operation, indicated by the returned value type `*nexus.HandlerStartOperationResultSync[T]`. Since `OnNexusOperation` needs to know the operation's name, input type, and output type, and you might not have access to the Nexus operation on the handler side, you can use `nexus.NewOperationReference` to create a Nexus operation reference that represents the operation without its implementation (essentially, the signature of the Nexus operation). You may also use the operation itself instead of creating the operation reference if you have it available.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_NexusSyncOperation() {
	s.env.OnNexusOperation(
		"sample-service",
		nexus.NewOperationReference[SampleInput, SampleOutput]("sample-operation"),
		SampleInput{},
		workflow.NexusOperationOptions{},
	).Return(
		&nexus.HandlerStartOperationResultSync[SampleOutput]{
			Value: SampleOutput{},
		},
		nil, // error if you want to simulate an error in the ExecuteOperation call
	)
	// You can also add a delay to return the mock values by calling After().
	// Eg: s.env.OnNexusOperation(...).Return(...).After(1*time.Second)

	s.env.ExecuteWorkflow(SimpleWorkflow, "test_nexus_operation")

	s.True(s.env.IsWorkflowCompleted())
	s.NoError(s.env.GetWorkflowError())
}
```

Besides synchronous operations, you can also mock asynchronous operations. The following example demonstrates how to test a Workflow executing an asynchronous Nexus operation. The returned value type in this case must be `*nexus.HandlerStartOperationResultAsync` with an `OperationToken`, which can be any string of your choice. Furthermore, you must call `RegisterNexusAsyncOperationCompletion` to register the result of the asynchronous operation identified by the tuple of service name, operation name, and operation token.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_NexusAsyncOperation() {
	s.env.OnNexusOperation(
		"sample-service",
		nexus.NewOperationReference[SampleInput, SampleOutput]("sample-operation"),
		SampleInput{},
		workflow.NexusOperationOptions{},
	).Return(
		&nexus.HandlerStartOperationResultAsync{
			OperationToken: "sample-operation-token",
		},
		nil, // error if you want to simulate an error in the ExecuteOperation call
	)
	err := s.env.RegisterNexusAsyncOperationCompletion(
		"sample-service",
		"sample-operation",
		"sample-operation-token", // must match the OperationToken above
		SampleOutput{},
		nil,           // error if you want to simulate an error in the operation
		2*time.Second, // delay to simulate how long the operation takes after it starts
	)
	s.NoError(err)

	s.env.ExecuteWorkflow(SimpleWorkflow, "test_nexus_operation")

	s.True(s.env.IsWorkflowCompleted())
	s.NoError(s.env.GetWorkflowError())
}
```

If your Workflow executes multiple Nexus asynchronous operations, you can mock each of them with different operation tokens, and register the completion results using the corresponding operation token.

If mocking Nexus operations is not enough, and you need to run some custom logic when the Nexus operation is executed, you can override it as follows.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_NexusSyncOperationOverride() {
	var SampleOperation = nexus.NewSyncOperation(
		"sample-operation",
		func(ctx context.Context, input SampleInput, options nexus.StartOperationOptions) (SampleOutput, error) {
			// Custom logic here.
			return SampleOutput{}, nil
		},
	)

	service := nexus.NewService("sample-service")
	s.NoError(service.Register(SampleOperation))
	s.env.RegisterNexusService(service)

	s.env.ExecuteWorkflow(SimpleWorkflow, "test_nexus_operation")

	s.True(s.env.IsWorkflowCompleted())
	s.NoError(s.env.GetWorkflowError())
}
```

The following example shows how to override a Nexus asynchronous operation.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_NexusAsyncOperationOverride() {
	SampleHandlerWorkflow := func(_ workflow.Context, input SampleInput) (SampleOutput, error) {
		// Custom logic here.
		return SampleOutput{}, nil
	}
	SampleOperation := temporalnexus.NewWorkflowRunOperation(
		"sample-operation",
		SampleHandlerWorkflow,
		func(ctx context.Context, input SampleInput, options nexus.StartOperationOptions) (client.StartWorkflowOptions, error) {
			// Custom logic to build client.StartWorkflowOptions.
			return client.StartWorkflowOptions{}, nil
		},
	)

	service := nexus.NewService("sample-service")
	s.NoError(service.Register(SampleOperation))
	s.env.RegisterNexusService(service)

	s.env.ExecuteWorkflow(SimpleWorkflow, "test_nexus_operation")

	s.True(s.env.IsWorkflowCompleted())
	s.NoError(s.env.GetWorkflowError())
}
```

## Testing Workflows {#test-workflows}

When running unit tests on Workflows, we want to test the Workflow logic in isolation. The simplest test case we can write is to have the test environment execute the Workflow and then evaluate the results.

```go
func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
	s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

	s.True(s.env.IsWorkflowCompleted())
	s.NoError(s.env.GetWorkflowError())
}
```

Calling `s.env.ExecuteWorkflow(...)` executes the Workflow logic and any invoked Activities inside the test process. The first parameter of `s.env.ExecuteWorkflow(...)` contains the Workflow function, and any subsequent parameters contain values for custom input parameters declared by the Workflow.

> Note that unless the Activity invocations are mocked or the Activity implementation is replaced (see [Activity mocking and overriding](#mock-and-override-activities)), the test environment will execute the actual Activity code, including any calls to outside services.

After executing the Workflow in the above example, we assert that the Workflow ran through completion via the call to `s.env.IsWorkflowCompleted()`. We also assert that no errors were returned, by asserting on the return value of `s.env.GetWorkflowError()`. If our Workflow returned a value, we could have retrieved that value via a call to `s.env.GetWorkflowResult(&value)` and had additional asserts on that value.

### Query tests

`TestWorkflowEnvironment` instances have a [`QueryWorkflow()` method](https://pkg.go.dev/go.temporal.io/temporal/internal#TestWorkflowEnvironment.QueryWorkflow) that lets you query the state of the currently running Workflow. For example, suppose you have a Workflow that lets you query the progress of a long-running task, as shown below.

```go
func ProgressWorkflow(ctx workflow.Context, percent int) error {
	logger := workflow.GetLogger(ctx)

	err := workflow.SetQueryHandler(ctx, "getProgress", func(input []byte) (int, error) {
		return percent, nil
	})
	if err != nil {
		logger.Info("SetQueryHandler failed.", "Error", err)
		return err
	}

	for percent = 0; percent < 100; percent++ {
		// Important! Use `workflow.Sleep()`, not `time.Sleep()`, because Temporal's
		// test environment doesn't stub out `time.Sleep()`.
		workflow.Sleep(ctx, time.Second*1)
	}

	return nil
}
```

This Workflow tracks the current progress of a task in percentage terms, and increments the percentage by 1 every second. Below is how you would write a test case that queries this Workflow. Note that you should always query the Workflow either after `ExecuteWorkflow()` is done or in a `RegisterDelayedCallback()` callback; otherwise, you'll get a `runtime error` panic.

```go
func (s *UnitTestSuite) Test_ProgressWorkflow() {
	value := 0

	// After 10 seconds plus padding, progress should be 10.
	// Note that `RegisterDelayedCallback()` doesn't actually make your test wait for 10 seconds!
	// Temporal's test framework advances time internally, so this test should take < 1 second.
	s.env.RegisterDelayedCallback(func() {
		res, err := s.env.QueryWorkflow("getProgress")
		s.NoError(err)
		err = res.Get(&value)
		s.NoError(err)
		s.Equal(10, value)
	}, time.Second*10+time.Millisecond*1)

	s.env.ExecuteWorkflow(ProgressWorkflow, 0)

	s.True(s.env.IsWorkflowCompleted())

	// Once the workflow is completed, progress should always be 100.
	res, err := s.env.QueryWorkflow("getProgress")
	s.NoError(err)
	err = res.Get(&value)
	s.NoError(err)
	s.Equal(100, value)
}
```

:::note
`RegisterDelayedCallback` can also be used to send [Signals](/sending-messages#sending-signals). When using "Signal-With-Start", set the delay to `0`.
:::

### How to mock Activities {#mock-activities}

Mock the Activity invocation when unit testing your Workflows.

When integration testing Workflows with a Worker, you can mock Activities by providing mock Activity implementations to the Worker.

### How to skip time {#skip-time}

Some long-running Workflows can persist for months or even years. Implementing the test framework allows your Workflow code to skip time and complete your tests in seconds rather than the Workflow's specified duration.

For example, if you have a Workflow that sleeps for a day, or an Activity failure with a long retry interval, you don't need to wait the entire length of the sleep period to test whether the sleep function works. Instead, test the logic that happens after the sleep by skipping forward in time and complete your tests in a timely manner.

The test framework included in most SDKs is an in-memory implementation of Temporal Server that supports skipping time. Time is a global property of an instance of `TestWorkflowEnvironment`: skipping time (either automatically or manually) applies to all currently running tests. If you need different time behaviors for different tests, run your tests in a series or with separate instances of the test server. For example, you could run all tests with automatic time skipping in parallel, then all tests with manual time skipping in series, and then all tests without time skipping in parallel.

#### Set up time skipping {#setting-up}

Set up the time-skipping test framework in the SDK of your choice.

You can skip time automatically in the SDK of your choice. Start a test server process that skips time as needed. For example, in the time-skipping mode, Timers, which include sleeps and conditional timeouts, are fast-forwarded except when Activities are running.

You can also skip time manually in the SDK of your choice.

#### Skip time in Activities {#skip-time-in-activities}

Skip time in Activities in the SDK of your choice.

## How to Replay a Workflow Execution {#replay}

Replay recreates the exact state of a Workflow Execution.
## How to Replay a Workflow Execution {#replay}

Replay recreates the exact state of a Workflow Execution. You can replay a Workflow from the beginning of its Event History. Replay succeeds only if the [Workflow Definition](/workflow-definition) is compatible with the provided history from a deterministic point of view.

When you test changes to your Workflow Definitions, we recommend doing the following as part of your CI checks:

1. Determine which Workflow Types or Task Queues (or both) will be targeted by the Worker code under test.
2. Download the Event Histories of a representative set of recent open and closed Workflows from each Task Queue, either programmatically using the SDK client or via the Temporal CLI.
3. Run the Event Histories through replay.
4. Fail CI if any error is encountered during replay.

The following are examples of fetching and replaying Event Histories:

Use the [worker.WorkflowReplayer](https://pkg.go.dev/go.temporal.io/sdk/worker#WorkflowReplayer) to replay an existing Workflow Execution from its Event History to replicate errors.

For example, the following code retrieves the Event History of a Workflow:

```go
import (
	"context"

	// Aliased because these packages' names differ from their import paths.
	enums "go.temporal.io/api/enums/v1"
	history "go.temporal.io/api/history/v1"

	"go.temporal.io/sdk/client"
)

func GetWorkflowHistory(ctx context.Context, c client.Client, id, runID string) (*history.History, error) {
	var hist history.History
	iter := c.GetWorkflowHistory(ctx, id, runID, false, enums.HISTORY_EVENT_FILTER_TYPE_ALL_EVENT)
	for iter.HasNext() {
		event, err := iter.Next()
		if err != nil {
			return nil, err
		}
		hist.Events = append(hist.Events, event)
	}
	return &hist, nil
}
```

This history can then be used to _replay_. For example, the following code creates a `WorkflowReplayer`, registers the `YourWorkflow` Workflow function, and then calls `ReplayWorkflowHistory` to _replay_ the Event History, returning an error if replay fails.

```go
import (
	"context"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func ReplayWorkflow(ctx context.Context, c client.Client, id, runID string) error {
	hist, err := GetWorkflowHistory(ctx, c, id, runID)
	if err != nil {
		return err
	}
	replayer := worker.NewWorkflowReplayer()
	replayer.RegisterWorkflow(YourWorkflow)
	return replayer.ReplayWorkflowHistory(nil, hist)
}
```

The code above causes the Worker to re-execute the Workflow's Workflow Function using the original Event History. If a noticeably different code path was followed or some code caused a deadlock, an error is returned. Replaying a Workflow Execution locally is a good way to see exactly what code path was taken for given input and events. You can replay many Event Histories by registering all the needed Workflow implementations and then calling `ReplayWorkflowHistory` repeatedly.
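In CI, it is often convenient to replay histories that were previously downloaded to disk. The following is a minimal sketch, assuming a history exported as JSON (for example, with something like `temporal workflow show --workflow-id <id> --output json > history.json`); the file name is illustrative.

```go
// Sketch: replay a single Event History saved to a JSON file.
// `YourWorkflow` stands in for your own Workflow function.
func ReplayFromFile() error {
	replayer := worker.NewWorkflowReplayer()
	replayer.RegisterWorkflow(YourWorkflow)
	// Reads the JSON-encoded Event History from disk and replays it
	// against the registered Workflow code.
	return replayer.ReplayWorkflowHistoryFromJSONFile(nil, "history.json")
}
```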
---

## Durable Timers - Go SDK

A Workflow can set a Durable Timer for a fixed time period. In some SDKs, the function is called `sleep()`, and in others, it's called `timer()`.

A Workflow can sleep for days, months, or even years. Timers are persisted, so even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service are back up, the `sleep()` call will resolve and your code will continue executing. Sleeping is a resource-light operation: it does not tie up the process, and you can run millions of Timers off a single Worker.

To set a Timer in Go, use the [`NewTimer()`](https://pkg.go.dev/go.temporal.io/sdk/workflow#NewTimer) function and pass the duration you want to wait before continuing.

```go
// NewTimer returns a workflow.Future that becomes ready when the Timer fires.
timer := workflow.NewTimer(timerCtx, duration)
```

To set a sleep duration in Go, use the [`Sleep()`](https://pkg.go.dev/go.temporal.io/sdk/workflow#Sleep) function and pass the duration you want to wait before continuing. A zero or negative sleep duration causes the function to return immediately.

```go
err := workflow.Sleep(ctx, 10*time.Second)
```

For more information, see the [Timer](https://github.com/temporalio/samples-go/tree/main/timer) example in the [Go Samples repository](https://github.com/temporalio/samples-go).
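Because `NewTimer()` returns a `Future`, Timers are commonly raced against other Futures with a `Selector`, for example to bound how long a Workflow waits for an Activity. The following is a minimal sketch under assumed names (`SomeActivity` is a placeholder), not a definitive pattern:

```go
// Sketch: race an Activity against a one-minute Timer using a Selector.
func TimeoutWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: 10 * time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	timerCtx, cancelTimer := workflow.WithCancel(ctx)
	timerFuture := workflow.NewTimer(timerCtx, time.Minute)
	activityFuture := workflow.ExecuteActivity(ctx, SomeActivity)

	var timedOut bool
	selector := workflow.NewSelector(ctx)
	selector.AddFuture(activityFuture, func(f workflow.Future) {
		cancelTimer() // the Activity finished first; cancel the pending Timer
	})
	selector.AddFuture(timerFuture, func(f workflow.Future) {
		timedOut = true // the Timer fired before the Activity completed
	})
	selector.Select(ctx) // blocks until one of the Futures is ready

	if timedOut {
		return temporal.NewApplicationError("timed out waiting for SomeActivity", "Timeout")
	}
	return activityFuture.Get(ctx, nil)
}
```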
---

## Versioning - Go SDK

Since Workflow Executions in Temporal can run for long periods (sometimes months or even years), it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress.

The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows.

With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.

There are two primary Versioning methods that you can use:

- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). The Worker Versioning feature allows you to tag your Workers and programmatically roll them out in versioned deployments, so that old Workers can run old code paths and new Workers can run new code paths.
- [Versioning with Patching](#patching). This method works by adding branches to your code tied to specific revisions. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions.

:::danger

Support for the pre-2025 experimental Worker Versioning method will be removed from Temporal Server in March 2026. Refer to the [latest Worker Versioning docs](/worker-versioning) for guidance. You can still refer to the [Worker Versioning Legacy](worker-versioning-legacy) docs if needed.

:::

## Worker Versioning

Temporal's [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) feature allows you to tag your Workers and programmatically roll them out in Deployment Versions, so that old Workers can run old code paths and new Workers can run new code paths. This way, you can pin your Workflows to specific revisions, avoiding the need for patching.

## Versioning with Patching {#patching}

### Patching with GetVersion

A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow Executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning).

Consider the following Workflow Definition:

```go
func YourWorkflow(ctx workflow.Context, data string) (string, error) {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var result1 string
	err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
	if err != nil {
		return "", err
	}

	var result2 string
	err = workflow.ExecuteActivity(ctx, ActivityB, result1).Get(ctx, &result2)
	return result2, err
}
```

Suppose you replaced `ActivityA` with `ActivityC` and deployed the updated code. If an existing Workflow Execution was started by the original version of the Workflow code, where `ActivityA` was run, and then resumed running on a new Worker where it was replaced with `ActivityC`, the server-side Event History would be out of sync. This would cause the Workflow to fail with a nondeterminism error.

To resolve this, you can use `workflow.GetVersion()` to patch your Workflow:

```go
var err error
v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
	err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else {
	err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
}
if err != nil {
	return "", err
}

var result2 string
err = workflow.ExecuteActivity(ctx, ActivityB, result1).Get(ctx, &result2)
return result2, err
```

When `workflow.GetVersion()` is run for the new Workflow Execution, it records a marker in the Event History so that all future calls to `GetVersion` for this change Id (`Step1` in the example) on this Workflow Execution will always return the given version number, which is `1` in the example.

If you make an additional change, such as replacing `ActivityC` with `ActivityD`, you need to add some additional code:

```go
v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 2)
if v == workflow.DefaultVersion {
	err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else if v == 1 {
	err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
} else {
	err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
}
```

Note that we changed `maxSupported` from 1 to 2. A Workflow Execution that already passed this point in its code before the `GetVersion()` call was introduced returns `DefaultVersion`. A Workflow that was run with `maxSupported` set to 1 returns 1. New Workflows return 2.

After all the Workflow Executions prior to version 1 have left retention, you can remove the code for that version:

```go
v := workflow.GetVersion(ctx, "Step1", 1, 2)
if v == 1 {
	err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
} else {
	err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
}
```

You'll note that `minSupported` has changed from `DefaultVersion` to `1`. If an older version of the Workflow Execution history is replayed on this code, it fails because the minimum expected version is 1. After all the Workflow Executions for version 1 have left retention, you can remove version 1 so that your code looks like the following:

```go
_ = workflow.GetVersion(ctx, "Step1", 2, 2)
err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
```

Note that we have preserved the call to `GetVersion()`. There are two reasons to preserve this call:

1. This ensures that if there is a Workflow Execution still running for an older version, it will fail here and not proceed.
2. If you need to make additional changes for `Step1`, such as changing `ActivityD` to `ActivityE`, you only need to update `maxSupported` from 2 to 3 and branch from there.

You need to preserve only the first call to `GetVersion()` for each `changeId`. All subsequent calls to `GetVersion()` with the same change Id are safe to remove. If necessary, you can remove the first `GetVersion()` call, but you need to ensure the following:

- All executions with an older version have left retention.
- You can no longer use `Step1` for the `changeId`. If you need to make changes to that same part in the future, such as changing `ActivityD` to `ActivityE`, you would need to use a different `changeId` like `Step1-fix2`, and start `minSupported` from `DefaultVersion` again. The code would look like the following:

```go
v := workflow.GetVersion(ctx, "Step1-fix2", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
	err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
} else {
	err = workflow.ExecuteActivity(ctx, ActivityE, data).Get(ctx, &result1)
}
```

You can add multiple calls to `GetVersion` in a single Workflow. This can become challenging to manage if you have many long-running Workflows, as you will wind up with many code branches over time. To clean these up, you can gradually deprecate older Workflow versions.

### Deprecating old Workflow versions

You can safely remove support for older Workflow versions once you are certain that there are no longer any open Workflow Executions based on that version. You can use the following [List Filter](/list-filter) syntax for this (the `1` at the end of the last line represents the version number):

```
WorkflowType = "PizzaWorkflow" AND ExecutionStatus = "Running" AND TemporalChangeVersion="ChangedNotificationActivityType-1"
```

Since Workflow Executions that were started before `GetVersion` was added to the code won't have the associated Marker in their Event History, you'll need to use a different query to determine if any of those are still running:

```
WorkflowType = "PizzaWorkflow" AND ExecutionStatus = "Running" AND TemporalChangeVersion IS NULL
```

If you have found that there are no longer any open executions for the first two versions of the Workflow, for example, then you could remove support for them by changing the code as shown below:

```go
version := workflow.GetVersion(ctx, "ChangedNotificationActivityType", 2, 3)
if version == 2 {
	err = workflow.ExecuteActivity(ctx, SendTextMessage).Get(ctx, nil)
} else {
	err = workflow.ExecuteActivity(ctx, SendTweet).Get(ctx, nil)
}
```

Patching allows you to make changes to currently running Workflows. It is a powerful method for introducing compatible changes without introducing non-determinism errors.
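You can run the same checks programmatically with the Client's `ListWorkflow` API before deleting a version branch. A minimal sketch, reusing the query from above (the Workflow Type and change Id are illustrative):

```go
import (
	"context"

	"go.temporal.io/api/workflowservice/v1"
	"go.temporal.io/sdk/client"
)

// Sketch: count open executions still running version 1 of a patch.
// Note this reads only the first page; follow NextPageToken for a full count.
func countOpenV1(ctx context.Context, c client.Client) (int, error) {
	resp, err := c.ListWorkflow(ctx, &workflowservice.ListWorkflowExecutionsRequest{
		Query: `WorkflowType = "PizzaWorkflow" AND ExecutionStatus = "Running" AND TemporalChangeVersion = "ChangedNotificationActivityType-1"`,
	})
	if err != nil {
		return 0, err
	}
	return len(resp.Executions), nil
}
```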
### Workflow cutovers

To understand why Patching is useful, it's helpful to demonstrate cutting over an entire Workflow. Since incompatible changes only affect open Workflow Executions of the same type, you can avoid determinism errors by creating a whole new Workflow when making changes. To do this, you can copy the Workflow Definition function, giving it a different name, and register both names with your Workers.

For example, you would duplicate `PizzaWorkflow` as `PizzaWorkflowV2`:

```go
func PizzaWorkflow(ctx workflow.Context, order PizzaOrder) (OrderConfirmation, error) {
	// This function contains the original code.
}

func PizzaWorkflowV2(ctx workflow.Context, order PizzaOrder) (OrderConfirmation, error) {
	// This function contains the updated code.
}
```

You can use any name you like for the new function, so long as the first character remains uppercase (this is a requirement for any Workflow Definition, since it must use an exported function). Using some type of version identifier, such as `V2` in this example, will make it easier to identify the change.

You would then need to update the Worker configuration, and any other identifier strings, to register both Workflow Types:

```go
w.RegisterWorkflow(pizza.PizzaWorkflow)
w.RegisterWorkflow(pizza.PizzaWorkflowV2)
```

The downside of this method is that it requires you to duplicate code and to update any commands used to start the Workflow. This can become impractical over time. This method also does not provide a way to version any still-running Workflows; it is essentially just a cutover, unlike Patching.

## Runtime checking {#runtime-checking}

The Temporal Go SDK performs a runtime check to help prevent obvious incompatible changes. Adding, removing, or reordering calls to any of these methods without Versioning triggers the runtime check and results in a nondeterminism error:

- `workflow.ExecuteActivity()`
- `workflow.ExecuteChildWorkflow()`
- `workflow.NewTimer()`
- `workflow.RequestCancelWorkflow()`
- `workflow.SideEffect()`
- `workflow.SignalExternalWorkflow()`
- `workflow.Sleep()`

The runtime check is not thorough. For example, it does not check the Activity's input arguments or the Timer duration. Each Temporal SDK implements these sanity checks differently, and they are not a complete check for non-deterministic changes. Instead, you should incorporate [Replay Testing](/develop/go/testing-suite#replay) when making revisions.

---

## Worker Versioning (Legacy) - Go SDK

## How to use Worker Versioning in Go (Deprecated) {#worker-versioning}

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). See the [Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

:::

A Build ID corresponds to a deployment. If you don't already have one, we recommend a hash of the code (such as a Git SHA) combined with a human-readable timestamp. To use Worker Versioning, you need to pass a Build ID to your Go Worker and opt in to Worker Versioning.

### Assign a Build ID to your Worker and opt in to Worker Versioning

You should understand assignment rules before completing this step. See the [Worker Versioning Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

To enable Worker Versioning for your Worker, assign the Build ID (perhaps from an environment variable) and turn it on.

```go
// ...
workerOptions := worker.Options{
	BuildID:                 buildID,
	UseBuildIDForVersioning: true,
	// ...
}
w := worker.New(c, "your_task_queue_name", workerOptions)
// ...
```

:::warning

Importantly, when you start this Worker, it won't receive any tasks until you set up assignment rules.

:::
### Specify versions for Activities, Child Workflows, and Continue-as-New Workflows

By default, Activities, Child Workflows, and Continue-as-New Workflows are run on the build of the Workflow that created them if they are also configured to run on the same Task Queue. When configured to run on a separate Task Queue, they will default to using the current assignment rules.

If you want to override this behavior, you can specify your intent via the `VersioningIntent` field on the appropriate options struct. For example, if you want an Activity to use the latest assignment rules rather than inheriting from its parent:

```go
// ...
ao := workflow.ActivityOptions{
	VersioningIntent: temporal.VersioningIntentUseAssignmentRules,
	// ...other options
}
activityCtx := workflow.WithActivityOptions(ctx, ao)
var yourActivityResult YourActivityResultType
err := workflow.ExecuteActivity(activityCtx, YourActivityDefinition, yourActivityParam).Get(ctx, &yourActivityResult)
// ...
```

#### Specifying versions for Continue-As-New

When using the Continue-As-New feature, use the `WithWorkflowVersioningIntent` context modifier:

```go
ctx = workflow.WithWorkflowVersioningIntent(ctx, temporal.VersioningIntentUseAssignmentRules)
return workflow.NewContinueAsNewError(ctx, "WorkflowName")
```

### Tell the Task Queue about your Worker's Build ID (Deprecated)

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

Now you can use the SDK (or the Temporal CLI) to tell the Task Queue about your Worker's Build ID. You might want to do this as part of your CI deployment process.

```go
// ...
// c is a client.Client
err := c.UpdateWorkerBuildIdCompatibility(ctx, &client.UpdateWorkerBuildIdCompatibilityOptions{
	TaskQueue: "your_task_queue_name",
	Operation: &client.BuildIDOpAddNewIDInNewDefaultSet{
		BuildID: "deadbeef",
	},
})
```

This code adds the `deadbeef` Build ID to the Task Queue as the sole version in a new version set, which becomes the default for the queue. New Workflows execute on Workers with this Build ID, and existing ones continue to be processed by compatible Workers.

If, instead, you want to add the Build ID to an existing compatible set, you can do this:

```go
// ...
err := c.UpdateWorkerBuildIdCompatibility(ctx, &client.UpdateWorkerBuildIdCompatibilityOptions{
	TaskQueue: "your_task_queue_name",
	Operation: &client.BuildIDOpAddNewCompatibleVersion{
		BuildID:                   "deadbeef",
		ExistingCompatibleBuildID: "some-existing-build-id",
	},
})
```

This code adds `deadbeef` to the existing compatible set containing `some-existing-build-id` and marks it as the new default Build ID for that set.

You can also promote an existing Build ID in a set to be the default for that set:

```go
// ...
err := c.UpdateWorkerBuildIdCompatibility(ctx, &client.UpdateWorkerBuildIdCompatibilityOptions{
	TaskQueue: "your_task_queue_name",
	Operation: &client.BuildIDOpPromoteIDWithinSet{
		BuildID: "some-existing-build-id",
	},
})
```

---

## Develop durable applications with Temporal SDKs

The Temporal SDK developer guides provide a comprehensive overview of the structures, primitives, and features used in [Temporal Application](/temporal#temporal-application) development.
- Go SDK [developer guide](/develop/go) and [API reference](http://t.mp/go-api)
- Java SDK [developer guide](/develop/java) and [API reference](http://t.mp/java-api)
- PHP SDK [developer guide](/develop/php) and [API reference](https://php.temporal.io/namespaces/temporal.html)
- Python SDK [developer guide](/develop/python) and [API reference](https://python.temporal.io)
- TypeScript SDK [developer guide](/develop/typescript) and [API reference](https://typescript.temporal.io)
- .NET SDK [developer guide](/develop/dotnet) and [API reference](https://dotnet.temporal.io/)
- Ruby SDK [developer guide](/develop/ruby) and [API reference](https://ruby.temporal.io/)

---

## Asynchronous Activity Completion - Java SDK

This page shows how to asynchronously complete an Activity.

[Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) enables the Activity Function to return without the Activity Execution completing.

There are three steps to follow:

1. The Activity provides the external system with identifying information needed to complete the Activity Execution. Identifying information can be a [Task Token](/activity-execution#task-token), or a combination of Namespace, Workflow Id, and Activity Id.
2. The Activity Function completes in a way that identifies it as waiting to be completed by an external system.
3. The Temporal Client is used to Heartbeat and complete the Activity.

To complete an Activity asynchronously, use the `complete()` method of the [`ActivityCompletionClient`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/ActivityCompletionClient.html) interface.

```java
@Override
public String composeGreeting(String greeting, String name) {
  // Get the activity execution context
  ActivityExecutionContext context = Activity.getExecutionContext();

  // Set a correlation token that can be used to complete the activity asynchronously
  byte[] taskToken = context.getTaskToken();

  /*
   * For the example we will use a {@link java.util.concurrent.ForkJoinPool} to execute our
   * activity. In real-life applications this could be any service. The composeGreetingAsync
   * method is the one that will actually complete the Activity Execution.
   */
  ForkJoinPool.commonPool().execute(() -> composeGreetingAsync(taskToken, greeting, name));
  context.doNotCompleteOnReturn();

  // Since we have set doNotCompleteOnReturn(), the method return value is ignored.
  return "ignored";
}

// Method that completes the Activity Execution using the defined ActivityCompletionClient
private void composeGreetingAsync(byte[] taskToken, String greeting, String name) {
  String result = greeting + " " + name + "!";
  // Complete the Activity Execution using the ActivityCompletionClient
  completionClient.complete(taskToken, result);
}
```
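The `completionClient` used above must be created from the `WorkflowClient` and made available to the Activity implementation, for example through its constructor. A minimal sketch; the wiring shown is an assumption for illustration:

```java
// Sketch: create an ActivityCompletionClient and hand it to the Activity implementation.
WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
WorkflowClient client = WorkflowClient.newInstance(service);
ActivityCompletionClient completionClient = client.newActivityCompletionClient();

// Pass the completion client into your Activity implementation, for example:
// worker.registerActivitiesImplementations(new GreetingActivitiesImpl(completionClient));
```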
Alternatively, call the [`doNotCompleteOnReturn()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityExecutionContext.html#doNotCompleteOnReturn()) method during an Activity Execution.

```java
@Override
public String composeGreeting(String greeting, String name) {
  // Get the activity execution context
  ActivityExecutionContext context = Activity.getExecutionContext();

  // Set a correlation token that can be used to complete the activity asynchronously
  byte[] taskToken = context.getTaskToken();

  /*
   * For the example we will use a {@link java.util.concurrent.ForkJoinPool} to execute our
   * activity. In real-life applications this could be any service. The composeGreetingAsync
   * method is the one that will actually complete the Activity Execution.
   */
  ForkJoinPool.commonPool().execute(() -> composeGreetingAsync(taskToken, greeting, name));
  context.doNotCompleteOnReturn();

  // Since we have set doNotCompleteOnReturn(), the method return value is ignored.
  return "ignored";
}
```

When this method is called during an Activity Execution, the Activity Execution does not complete when its method returns.

---

## Benign exceptions - Java SDK

**How to mark an Activity error as benign using the Temporal Java SDK**

When Activities throw errors that are expected or not severe, they can create noise in your logs, metrics, and OpenTelemetry traces, making it harder to identify real issues. By marking these errors as benign, you can exclude them from your observability data while still handling them in your Workflow logic.

To mark an error as benign, set the category to `ApplicationErrorCategory.BENIGN` using the [`ApplicationFailure`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/ApplicationFailure.html) builder.

Benign errors:

- Have Activity failure logs downgraded to DEBUG level
- Do not emit Activity failure metrics
- Do not set the OpenTelemetry failure status to ERROR

```java
@ActivityInterface
public interface MyActivities {
  @ActivityMethod
  String myActivity();
}

public class MyActivitiesImpl implements MyActivities {
  @Override
  public String myActivity() {
    try {
      return callExternalService();
    } catch (Exception e) {
      // Mark this error as benign since it's expected
      throw ApplicationFailure.newBuilder()
          .setMessage(e.getMessage())
          .setType(e.getClass().getName())
          .setCause(e)
          .setCategory(ApplicationErrorCategory.BENIGN)
          .build();
    }
  }
}
```

Use benign exceptions for Activity errors that occur regularly as part of normal operations, such as polling an external service that isn't ready yet, or handling expected transient failures that will be retried.

---

## Interrupt a Workflow Execution - Java SDK

You can interrupt a Workflow Execution in one of the following ways:

- [Cancel](#cancellation): Canceling a Workflow provides a graceful way to stop Workflow Execution.
- [Terminate](#termination): Terminating a Workflow forcefully stops Workflow Execution. This action resembles killing a process.
  - The system records a `WorkflowExecutionTerminated` event in the Workflow History.
  - The termination forcefully and immediately stops the Workflow Execution.
  - The Workflow code gets no chance to handle termination.
  - A Workflow Task doesn't get scheduled.

In most cases, canceling is preferable because it allows the Workflow to finish gracefully. Terminate only if the Workflow is stuck and cannot be canceled normally.

## Cancel a Workflow Execution {#cancellation}

Canceling a Workflow provides a graceful way to stop Workflow Execution. This action resembles sending a `SIGTERM` to a process.

- The system records a `WorkflowExecutionCancelRequested` event in the Workflow History.
- A Workflow Task gets scheduled to process the cancelation.
- The Workflow code can handle the cancelation and execute any cleanup logic.
- The system doesn't forcefully stop the Workflow.

To cancel a Workflow Execution in Java, use the `cancel()` method on the `WorkflowStub`.
```java
WorkflowStub workflowStub = WorkflowStub.fromTyped(workflow);
workflowStub.cancel();
```

## Cancellation scopes in Java {#cancellation-scopes}

In the Java SDK, Workflows are represented internally by a tree of cancellation scopes, each with cancellation behaviors you can specify. By default, everything runs in the "root" scope. Scopes are created using the `Workflow.newCancellationScope` method.

Cancellations are applied to cancellation scopes, which can encompass an entire Workflow or just part of one. Scopes can be nested, and cancellation propagates from outer scopes to inner ones. A Workflow's method runs in the outermost scope. Cancellations are handled by catching `CanceledFailure`s thrown by cancelable operations.

You can also use the following APIs:

- `CancellationScope.current()`: Get the current scope.
- `scope.cancel()`: Cancel all operations inside a `scope`.
- `scope.getCancellationRequest()`: A promise that resolves when a scope cancellation is requested, such as when Workflow code calls `cancel()` or the entire Workflow is canceled by an external client.

When a `CancellationScope` is canceled, it propagates cancellation to any child scopes and to any cancelable operations created within it, such as the following:

- Activities
- Timers (created with the `Workflow.sleep()` function)
- Child Workflows
- Nexus Operations

### Cancel an Activity from a Workflow {#cancel-activity}

Canceling an Activity from within a Workflow requires that the Activity Execution sends Heartbeats and sets a Heartbeat Timeout. If the Heartbeat is not invoked, the Activity cannot receive a cancellation request. When any non-immediate Activity is executed, the Activity Execution should send Heartbeats and set a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) to ensure that the server knows it is still working.

When an Activity is canceled, an error is raised in the Activity at the next available opportunity. If cleanup logic needs to be performed, it can be done in a `finally` clause or inside a caught cancel error. However, for the Activity to appear canceled, the exception needs to be rethrown.

:::note

Unlike regular Activities, [Local Activities](/local-activity) currently do not support cancellation.

:::

To cancel an Activity from a Workflow Execution, call the `cancel()` method on the `CancellationScope` that the Activity was started in.

```java
public class GreetingWorkflowImpl implements GreetingWorkflow {
  @Override
  public String getGreeting(String name) {
    List<Promise<String>> results = new ArrayList<>(greetings.length);

    /*
     * Create our CancellationScope. Within this scope we call the workflow activity
     * composeGreeting method asynchronously for each of our defined greetings in different
     * languages.
     */
    CancellationScope scope =
        Workflow.newCancellationScope(
            () -> {
              for (String greeting : greetings) {
                results.add(Async.function(activities::composeGreeting, greeting, name));
              }
            });

    /*
     * Execute all activities within the CancellationScope. Note that this execution is
     * non-blocking as the code inside our cancellation scope is also non-blocking.
     */
    scope.run();

    // We use "anyOf" here to wait for one of the activity invocations to return
    String result = Promise.anyOf(results).get();

    // Trigger cancellation of all uncompleted activity invocations within the cancellation scope
    scope.cancel();

    /*
     * Wait for all activities to perform cleanup if needed.
     * For the sake of the example we ignore cancellations and
     * get all the results so that we can print them in the end.
     *
     * Note that we cannot use "allOf" here as that fails on any Promise failures
     */
    for (Promise<String> activityResult : results) {
      try {
        activityResult.get();
      } catch (ActivityFailure e) {
        if (!(e.getCause() instanceof CanceledFailure)) {
          throw e;
        }
      }
    }
    return result;
  }
}
```

## Terminate a Workflow Execution {#termination}

Terminating a Workflow forcefully stops Workflow Execution. This action resembles killing a process.

- The system records a `WorkflowExecutionTerminated` event in the Workflow History.
- The termination forcefully and immediately stops the Workflow Execution.
- The Workflow code gets no chance to handle termination.
- A Workflow Task doesn't get scheduled.

To terminate a Workflow Execution in Java, use the `terminate()` method on the `WorkflowStub`.

```java
WorkflowStub untyped = WorkflowStub.fromTyped(myWorkflowStub);
untyped.terminate("Sample reason");
```

## Reset a Workflow Execution {#reset}

Resetting a Workflow Execution terminates the current Workflow Execution and starts a new Workflow Execution from a point you specify in its Event History. Use reset when a Workflow is blocked due to a non-deterministic error or other issues that prevent it from completing.

When you reset a Workflow, the Event History up to the reset point is copied to the new Workflow Execution, and the Workflow resumes from that point with the current code. Reset only works if you've fixed the underlying issue, such as removing non-deterministic code. Any progress made after the reset point will be discarded. Provide a reason when resetting, as it will be recorded in the Event History.

1. Navigate to the Workflow Execution details page.
2. Click the **Reset** button in the top right dropdown menu.
3. Select the Event ID to reset to.
4. Provide a reason for the reset.
5. Confirm the reset.

The Web UI shows available reset points and creates a link to the new Workflow Execution after the reset completes.

Use the `temporal workflow reset` command to reset a Workflow Execution:

```bash
temporal workflow reset \
  --workflow-id <workflow-id> \
  --event-id <event-id> \
  --reason "Reason for reset"
```

For example:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code"
```

By default, the command resets the latest Workflow Execution in the `default` Namespace. Use `--run-id` to reset a specific run. Use `--namespace` to specify a different Namespace:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code" \
  --namespace my-namespace \
  --tls-cert-path /path/to/cert.pem \
  --tls-key-path /path/to/key.pem
```

Monitor the new Workflow Execution after resetting to ensure it completes successfully.

---

## Child Workflows - Java SDK

This page shows how to do the following:

- [Start a Child Workflow Execution](#start-child-workflow)
- [Set a Parent Close Policy](#parent-close-policy)

## Start a Child Workflow Execution {#start-child-workflow}

**How to start a Child Workflow Execution using the Java SDK.**

A [Child Workflow Execution](/child-workflows) is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.

When using a Child Workflow API, Child Workflow related Events ([StartChildWorkflowExecutionInitiated](/references/events#startchildworkflowexecutioninitiated), [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted), [ChildWorkflowExecutionCompleted](/references/events#childworkflowexecutioncompleted), etc.)
are logged in the Workflow Execution Event History.

Always block progress until the [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted) Event is logged to the Event History to ensure the Child Workflow Execution has started. After that, Child Workflow Executions may be abandoned using the _Abandon_ [Parent Close Policy](/parent-close-policy) set in the Child Workflow Options.

To be sure that the Child Workflow Execution has started, first invoke the Child Workflow's method asynchronously, then call `Workflow.getWorkflowExecution` on the child stub and wait for the `Promise` it returns to resolve; that `Promise` resolves once the Child Workflow Execution has spawned.

### Async Child Workflows

The first call to the Child Workflow stub must always be its Workflow method (method annotated with `@WorkflowMethod`). Similar to Activities, invoking Child Workflow methods can be made synchronous or asynchronous by using `Async#function` or `Async#procedure`. The synchronous call blocks until a Child Workflow method completes. The asynchronous call returns a `Promise` which can be used to wait for the completion of the Child Workflow method, as in the following example:

```java
GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);
Promise<String> greeting = Async.function(child::composeGreeting, "Hello", name);
// ...
greeting.get()
```

To execute an untyped Child Workflow asynchronously, call `executeAsync` on the `ChildWorkflowStub`, as shown in the following example.

```java
//...
ChildWorkflowStub childUntyped =
    Workflow.newUntypedChildWorkflowStub(
        "GreetingChild", // your workflow type
        ChildWorkflowOptions.newBuilder().setWorkflowId("childWorkflow").build());
Promise<String> greeting = childUntyped.executeAsync(String.class, String.class, "Hello", name);
String result = greeting.get();
//...
```

The following examples show how to spawn a Child Workflow:

- Spawn a Child Workflow from a Workflow:

```java
// Child Workflow interface
@WorkflowInterface
public interface GreetingChild {
  @WorkflowMethod
  String composeGreeting(String greeting, String name);
}

// Child Workflow implementation not shown

// Parent Workflow implementation
public class GreetingWorkflowImpl implements GreetingWorkflow {
  @Override
  public String getGreeting(String name) {
    GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);

    // This is a blocking call that returns only after child has completed.
    return child.composeGreeting("Hello", name);
  }
}
```

- Spawn two Child Workflows (with the same type) in parallel:

```java
// Parent Workflow implementation
public class GreetingWorkflowImpl implements GreetingWorkflow {
  @Override
  public String getGreeting(String name) {
    // Workflows are stateful, so a new stub must be created for each new child.
    GreetingChild child1 = Workflow.newChildWorkflowStub(GreetingChild.class);
    Promise<String> greeting1 = Async.function(child1::composeGreeting, "Hello", name);

    // Both children will run concurrently.
    GreetingChild child2 = Workflow.newChildWorkflowStub(GreetingChild.class);
    Promise<String> greeting2 = Async.function(child2::composeGreeting, "Bye", name);

    // Do something else here.
    ...
return "First: " + greeting1.get() + ", second: " + greeting2.get(); } } ``` - Send a Signal to a Child Workflow from the parent: ```java // Child Workflow interface @WorkflowInterface public interface GreetingChild { @WorkflowMethod String composeGreeting(String greeting, String name); @SignalMethod void updateName(String name); } // Parent Workflow implementation public class GreetingWorkflowImpl implements GreetingWorkflow { @Override public String getGreeting(String name) { GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class); Promise greeting = Async.function(child::composeGreeting, "Hello", name); child.updateName("Temporal"); return greeting.get(); } } ``` - Sending a Query to Child Workflows from within the parent Workflow code is not supported. However, you can send a Query to Child Workflows from Activities using `WorkflowClient`. Related reads: - [How to develop a Workflow Definition](/develop/java/core-application#develop-workflows) - Java Workflow reference: [https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/package-summary.html](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/package-summary.html) ## Parent Close Policy {#parent-close-policy} **How to set a Parent Close Policy for a Child Workflow using the Java SDK.** A [Parent Close Policy](/parent-close-policy) determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out). The default Parent Close Policy option is set to terminate the Child Workflow Execution. Set [Parent Close Policy](/parent-close-policy) on an instance of `ChildWorkflowOptions` using [`ChildWorkflowOptions.newBuilder().setParentClosePolicy`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/ChildWorkflowOptions.Builder.html). - Type: `ChildWorkflowOptions.Builder` - Default: `PARENT_CLOSE_POLICY_TERMINATE` ```java public void parentWorkflow() { ChildWorkflowOptions options = ChildWorkflowOptions.newBuilder() .setParentClosePolicy(ParentClosePolicy.PARENT_CLOSE_POLICY_ABANDON) .build(); MyChildWorkflow child = Workflow.newChildWorkflowStub(MyChildWorkflow.class, options); Async.procedure(child::, ...); Promise childExecution = Workflow.getWorkflowExecution(child); // Wait for child to start childExecution.get() } ``` In this example, we are: 1. Setting `ChildWorkflowOptions.ParentClosePolicy` to `ABANDON` when creating a Child Workflow stub. 2. Starting Child Workflow Execution asynchronously using `Async.function` or `Async.procedure`. 3. Calling `Workflow.getWorkflowExecution(…)` on the child stub. 4. Waiting for the `Promise` returned by `getWorkflowExecution` to complete. This indicates whether the Child Workflow started successfully (or failed). 5. Completing parent Workflow Execution asynchronously. Steps 3 and 4 are needed to ensure that a Child Workflow Execution starts before the parent closes. If the parent initiates a Child Workflow Execution and then completes immediately after, the Child Workflow will never execute. --- ## Continue-As-New - Java SDK This page answers the following questions for Java developers: - [What is Continue-As-New?](#what) - [How to Continue-As-New?](#how) - [When is it right to Continue-as-New?](#when) - [How to test Continue-as-New?](#how-to-test) ## What is Continue-As-New? {#what} [Continue-As-New](/workflow-execution/continue-as-new) lets a Workflow Execution close successfully and creates a new Workflow Execution. 
---

## Continue-As-New - Java SDK

This page answers the following questions for Java developers:

- [What is Continue-As-New?](#what)
- [How to Continue-As-New?](#how)
- [When is it right to Continue-as-New?](#when)
- [How to test Continue-as-New?](#how-to-test)

## What is Continue-As-New? {#what}

[Continue-As-New](/workflow-execution/continue-as-new) lets a Workflow Execution close successfully and creates a new Workflow Execution. You can think of it as a checkpoint when your Workflow gets too long or approaches certain scaling limits.

The new Workflow Execution is in the same [chain](/workflow-execution#workflow-execution-chain); it keeps the same Workflow Id but gets a new Run Id and a fresh Event History. It also receives your Workflow's usual parameters.

## How to Continue-As-New using the Java SDK {#how}

First, design your Workflow parameters so that you can pass in the "current state" when you Continue-As-New into the next Workflow run. This state is typically empty (for example, `Optional.empty()`) for the original caller of the Workflow.

```java
class ClusterManagerInput {
  private final Optional<ClusterManagerState> state;
  private final boolean testContinueAsNew;
}

@WorkflowMethod
ClusterManagerResult run(ClusterManagerInput input);
```

The test hook in the above snippet is covered [below](#how-to-test).

Inside your Workflow, call the [`continueAsNew()`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#continueAsNew(io.temporal.workflow.ContinueAsNewOptions,java.lang.Object...)) function with an input of the same type. This stops the Workflow right away and starts a new one.

```java
Workflow.continueAsNew(
    new ClusterManagerInput(Optional.of(state), input.isTestContinueAsNew()));
```

### Considerations for Workflows with Message Handlers {#with-message-handlers}

If you use Updates or Signals, don't call Continue-as-New from the handlers. Instead, wait for your handlers to finish in your main Workflow before you run `continueAsNew` (a sketch of this pattern appears at the end of this page).

## When is it right to Continue-as-New using the Java SDK? {#when}

Use Continue-as-New when your Workflow might hit [Event History Limits](/workflow-execution/event#event-history). Temporal tracks your Workflow's progress against these limits to let you know when you should Continue-as-New. Call `Workflow.getInfo().isContinueAsNewSuggested()` to check if it's time.

## How to test Continue-as-New using the Java SDK {#how-to-test}

Testing Workflows that naturally Continue-as-New may be time-consuming and resource-intensive. Instead, add a test hook to check your Workflow's Continue-as-New behavior faster in automated tests.

For example, when `testContinueAsNew == true`, this sample creates a test-only variable called `maxHistoryLength` and sets it to a small value. A helper method in the Workflow checks it each time it considers using Continue-as-New:

```java
private boolean shouldContinueAsNew() {
  if (Workflow.getInfo().isContinueAsNewSuggested()) {
    return true;
  }
  // This is just for ease-of-testing. In production, we trust temporal to tell us when to
  // continue as new.
  if (maxHistoryLength > 0 && Workflow.getInfo().getHistoryLength() > maxHistoryLength) {
    return true;
  }
  return false;
}
```
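A minimal sketch of draining handlers before Continue-As-New, reusing the names from the snippets above. `Workflow.isEveryHandlerFinished()` is available in recent Java SDK releases; treat the surrounding code as illustrative:

```java
// Sketch: wait for all Signal/Update handlers to finish before continuing as new.
if (shouldContinueAsNew()) {
  Workflow.await(() -> Workflow.isEveryHandlerFinished());
  Workflow.continueAsNew(
      new ClusterManagerInput(Optional.of(state), input.isTestContinueAsNew()));
}
```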
---

## Converters and encryption - Java SDK

Temporal's security model is designed around client-side encryption of Payloads. A client may encrypt Payloads before sending them to the server, and decrypt them after receiving them from the server. This provides a high degree of confidentiality because the Temporal Server itself has absolutely no knowledge of the actual data. It also gives implementers more power and more freedom regarding which client is able to read which data; they can control access with keys, algorithms, or other security measures.

A Temporal developer adds client-side encryption of Payloads by providing a Custom Payload Codec to its Client. Depending on business needs, a complete implementation of Payload Encryption may involve selecting appropriate encryption algorithms, managing encryption keys, restricting a subset of their users from viewing payload output, or a combination of these.

The server itself never adds encryption over Payloads. Therefore, unless client-side encryption is implemented, Payload data will be persisted in non-encrypted form to the data store, and any Client that can make requests to a Temporal namespace (including the Temporal UI and CLI) will be able to read Payloads contained in Workflows. When working with sensitive data, you should always implement Payload encryption.

## Custom Payload Codec in Java {#custom-payload-codec}

**How to create a custom Payload Codec using the Java SDK.**

Create a custom implementation of [`PayloadCodec`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/payload/codec/PayloadCodec.html) and use it in [`CodecDataConverter`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/CodecDataConverter.html) to set a custom Data Converter.

The Payload Codec does byte-to-byte conversion and must be set with a Data Converter. Define custom encryption/compression logic in your `encode` method and decryption/decompression logic in your `decode` method.

The following example from the [Java encryption sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/encryptedpayloads/CryptCodec.java) shows how to implement encryption and decryption logic on your payloads in your `encode` and `decode` methods.

```java
class YourCustomPayloadCodec implements PayloadCodec {

  static final ByteString METADATA_ENCODING =
      ByteString.copyFrom("binary/encrypted", StandardCharsets.UTF_8);

  private static final String CIPHER = "AES/GCM/NoPadding";

  // Define constants that you can add to your encoded Payload to create a new Payload.
  static final String METADATA_ENCRYPTION_CIPHER_KEY = "encryption-cipher";
  static final ByteString METADATA_ENCRYPTION_CIPHER =
      ByteString.copyFrom(CIPHER, StandardCharsets.UTF_8);
  static final String METADATA_ENCRYPTION_KEY_ID_KEY = "encryption-key-id";

  private static final Charset UTF_8 = StandardCharsets.UTF_8;

  // See the linked sample for details on the methods called here.

  @NotNull
  @Override
  public List<Payload> encode(@NotNull List<Payload> payloads) {
    return payloads.stream().map(this::encodePayload).collect(Collectors.toList());
  }

  @NotNull
  @Override
  public List<Payload> decode(@NotNull List<Payload> payloads) {
    return payloads.stream().map(this::decodePayload).collect(Collectors.toList());
  }

  private Payload encodePayload(Payload payload) {
    String keyId = getKeyId();
    SecretKey key = getKey(keyId);
    byte[] encryptedData;
    try {
      // The encrypt method contains your custom encryption logic.
      encryptedData = encrypt(payload.toByteArray(), key);
    } catch (Throwable e) {
      throw new DataConverterException(e);
    }

    // Apply metadata to the encoded Payload that you can verify in your decode method before decoding.
    // See the sample for details on the metadata values set.
    return Payload.newBuilder()
        .putMetadata(EncodingKeys.METADATA_ENCODING_KEY, METADATA_ENCODING)
        .putMetadata(METADATA_ENCRYPTION_CIPHER_KEY, METADATA_ENCRYPTION_CIPHER)
        .putMetadata(METADATA_ENCRYPTION_KEY_ID_KEY, ByteString.copyFromUtf8(keyId))
        .setData(ByteString.copyFrom(encryptedData))
        .build();
  }

  private Payload decodePayload(Payload payload) {
    // Verify the incoming encoded Payload metadata before applying decryption.
    if (METADATA_ENCODING.equals(
        payload.getMetadataOrDefault(EncodingKeys.METADATA_ENCODING_KEY, null))) {
      String keyId;
      try {
        keyId = payload.getMetadataOrThrow(METADATA_ENCRYPTION_KEY_ID_KEY).toString(UTF_8);
      } catch (Exception e) {
        throw new PayloadCodecException(e);
      }
      SecretKey key = getKey(keyId);
      byte[] plainData;
      Payload decryptedPayload;
      try {
        // The decrypt method contains your custom decryption logic.
        plainData = decrypt(payload.getData().toByteArray(), key);
        decryptedPayload = Payload.parseFrom(plainData);
        return decryptedPayload;
      } catch (Throwable e) {
        throw new PayloadCodecException(e);
      }
    } else {
      return payload;
    }
  }

  private String getKeyId() {
    // Currently there is no context available to vary which key is used.
    // Use a fixed key for all payloads.
    // This still supports key rotation as the key ID is recorded on payloads allowing
    // decryption to use a previous key.
    return "test-key-test-key-test-key-test!";
  }

  private SecretKey getKey(String keyId) {
    // Key must be fetched from KMS or other secure storage.
    // Hard coded here only for example purposes.
    return new SecretKeySpec(keyId.getBytes(UTF_8), "AES");
  }

  //...
}
```

**Set Data Converter to use custom Payload Codec**

Use `CodecDataConverter` with an instance of a Data Converter and the custom `PayloadCodec` in the `WorkflowClient` options that you use in your Worker process and to start your Workflow Executions.

For example, to set a custom `PayloadCodec` implementation with `DefaultDataConverter`, use the following code:

```java
WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
// Client that can be used to start and signal Workflows
WorkflowClient client =
    WorkflowClient.newInstance(
        service,
        WorkflowClientOptions.newBuilder()
            .setDataConverter(
                // Sets the custom Payload Codec created in the previous example
                // with an instance of the default Data Converter.
                new CodecDataConverter(
                    DefaultDataConverter.newDefaultInstance(),
                    Collections.singletonList(new YourCustomPayloadCodec())))
            .build());
```

- Data **encoding** is performed by the client using the converters and codecs provided by Temporal or your custom implementation when passing input to the Temporal Cluster. For example, plain text input is usually serialized into a JSON object, and can then be compressed or encrypted.
- Data **decoding** may be performed by your application logic during your Workflows or Activities as necessary, but decoded Workflow results are never persisted back to the Temporal Cluster. Instead, they are stored encoded on the Cluster, and you need to provide an additional parameter when using the [temporal workflow show](/cli/workflow#show) command or when browsing the Web UI to view output.

For reference, see the [Encryption](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/encryptedpayloads) sample.
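For example, once a Codec Server (described next) is running, you can point the CLI at it to see decoded output; the endpoint URL below is illustrative:

```bash
temporal workflow show \
  --workflow-id your-workflow-id \
  --codec-endpoint http://localhost:8888
```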
### Using a Codec Server

A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. The Codec Server is independent of the Temporal Cluster and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment. The Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand. Refer to the [Codec Server](/production-deployment/data-encryption) documentation for information on how to design and deploy a Codec Server.

For reference, see the [Codec server](https://github.com/temporalio/sdk-java/tree/master/temporal-remote-data-encoder) sample.

## Using custom Payload conversion {#custom-payload-conversion}

**How to do custom Payload conversion using the Java SDK.**

Temporal SDKs provide a [Payload Converter](/payload-converter) that can be customized to convert a custom data type to [Payload](/dataconversion#payload) and back.

Implementing custom Payload conversion is optional. It is needed only if the [default Data Converter](/default-custom-data-converters#default-data-converter) does not support your custom values.

To support custom Payload conversion, create a [custom Payload Converter](/payload-converter#composite-data-converters) and configure the Data Converter to use it in your Client options.

The order in which your encoding Payload Converters are applied depends on the order given to the Data Converter. You can set multiple encoding Payload Converters to run your conversions. When the Data Converter receives a value for conversion, it passes through each Payload Converter in sequence until the converter that handles the data type does the conversion.

Payload Converters can be customized independently of a Payload Codec.

Create a custom implementation of the [PayloadConverter](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/PayloadConverter.html) interface and use the `withPayloadConverterOverrides` method to implement the custom object conversion with `DefaultDataConverter`.

`PayloadConverter` serializes and deserializes method parameters that need to be sent over the wire. You can create a custom implementation of `PayloadConverter` for custom formats, as shown in the following example:

```java
/** Payload Converter specific to your custom object */
public class YourCustomPayloadConverter implements PayloadConverter {

  //...

  @Override
  public String getEncodingType() {
    return "json/plain"; // The encoding type determines which default conversion behavior to override.
  }

  @Override
  public Optional<Payload> toData(Object value) throws DataConverterException {
    // Add your convert-to logic here.
  }

  @Override
  public <T> T fromData(Payload content, Class<T> valueClass, Type valueType)
      throws DataConverterException {
    // Add your convert-from logic here.
  }

  //...
}
```

You can also use [specific implementation classes](https://www.javadoc.io/static/io.temporal/temporal-sdk/1.18.1/io/temporal/common/converter/package-summary.html) provided in the Java SDK. For example, to create a custom `JacksonJsonPayloadConverter`, use the following:

```java
//...
private static JacksonJsonPayloadConverter yourCustomJacksonJsonPayloadConverter() {
  ObjectMapper objectMapper = new ObjectMapper();
  // Add your custom logic here.
  return new JacksonJsonPayloadConverter(objectMapper);
}
//...
```

To set your custom Payload Converter, pass it to [withPayloadConverterOverrides](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/DefaultDataConverter.html#withPayloadConverterOverrides(io.temporal.common.converter.PayloadConverter...)) on a new instance of `DefaultDataConverter` in the `WorkflowClient` options that you use in your Worker process and to start your Workflow Executions. The following example shows how to set a custom `YourCustomPayloadConverter` Payload Converter.

```java
//...
DefaultDataConverter ddc =
    DefaultDataConverter.newDefaultInstance()
        .withPayloadConverterOverrides(new YourCustomPayloadConverter());

WorkflowClientOptions workflowClientOptions =
    WorkflowClientOptions.newBuilder().setDataConverter(ddc).build();
//...
```

---

## Core application - Java SDK

This page shows how to do the following:

- [Develop a Workflow Definition](#develop-workflows)
- [Develop a basic Activity](#develop-activities)
- [Start an Activity Execution](#activity-execution)
- [Run a Development Worker](#run-a-dev-worker)

## Develop a Workflow Definition {#develop-workflows}

**How to develop a Workflow Definition using the Java SDK.**

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a [Workflow Definition](/workflow-definition).

In the Temporal Java SDK programming model, a Workflow Definition comprises a Workflow interface annotated with `@WorkflowInterface` and a Workflow implementation that implements the Workflow interface. Each Workflow interface must have exactly one method annotated with `@WorkflowMethod`.

```java
// Workflow interface
@WorkflowInterface
public interface YourWorkflow {

  @WorkflowMethod
  String yourWFMethod(Arguments args);
}
```

However, when using dynamic Workflows, do not specify a `@WorkflowMethod`; instead, implement the `DynamicWorkflow` interface directly in the Workflow implementation code.

The `@WorkflowMethod` identifies the method that is the starting point of the Workflow Execution. The Workflow Execution completes when this method completes.

You can create [interface inheritance hierarchies](#interface-inheritance) to reuse components across other Workflow interfaces. The interface inheritance approach does not apply to `@WorkflowMethod` annotations.

A Workflow implementation implements a Workflow interface.

```java
// Define the Workflow implementation, which implements the getGreeting Workflow method.
public static class GreetingWorkflowImpl implements GreetingWorkflow {
  // ...
}
```

To call Activities in your Workflow, create an Activity stub with `Workflow.newActivityStub()` and call its methods. Use `ExternalWorkflowStub` to start or send Signals from within a Workflow to other running Workflow Executions. You can also invoke other Workflows as Child Workflows with `Workflow.newChildWorkflowStub()` or `Workflow.newUntypedChildWorkflowStub()` within a Workflow Definition.
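Putting these pieces together, here is a minimal sketch of a complete Workflow Definition that calls an Activity through a stub. The `GreetingActivities` interface and the timeout value are assumptions for illustration:

```java
// Sketch: a complete interface-plus-implementation Workflow Definition.
@WorkflowInterface
public interface GreetingWorkflow {
  @WorkflowMethod
  String getGreeting(String name);
}

public class GreetingWorkflowImpl implements GreetingWorkflow {
  // Activity stub: method calls are dispatched to Workers as Activity Tasks.
  private final GreetingActivities activities =
      Workflow.newActivityStub(
          GreetingActivities.class,
          ActivityOptions.newBuilder()
              .setStartToCloseTimeout(Duration.ofSeconds(30))
              .build());

  @Override
  public String getGreeting(String name) {
    // Durably blocks until the Activity completes.
    return activities.composeGreeting("Hello", name);
  }
}
```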
### Workflow interface inheritance {#interface-inheritance}

Workflow interfaces can form inheritance hierarchies. This can be useful for creating reusable components across multiple Workflow interfaces. For example, imagine a UI or CLI button that allows a `retryNow` Signal on any Workflow. To implement this feature, you can design an interface like the following:

```java
public interface Retryable {
  @SignalMethod
  void retryNow();
}

@WorkflowInterface
public interface FileProcessingWorkflow extends Retryable {

  @WorkflowMethod
  String processFile(Arguments args);

  @QueryMethod(name = "history")
  List<String> getHistory();

  @QueryMethod
  String getStatus();

  @SignalMethod
  void abandon();
}
```

Then some other Workflow interface can extend just `Retryable`, for example:

```java
@WorkflowInterface
public interface MediaProcessingWorkflow extends Retryable {

  @WorkflowMethod
  String processBlob(Arguments args);
}
```

Now if we have two running Workflows, one that implements the `FileProcessingWorkflow` interface and another that implements the `MediaProcessingWorkflow` interface, we can Signal to both using their common interface and knowing their WorkflowIds, for example:

```java
Retryable r1 = client.newWorkflowStub(Retryable.class, firstWorkflowId);
Retryable r2 = client.newWorkflowStub(Retryable.class, secondWorkflowId);
r1.retryNow();
r2.retryNow();
```

The same technique can be used to query Workflows using a base Workflow interface.

Note that this approach does not apply to `@WorkflowMethod` annotations, meaning that when using a base interface, it should not include any `@WorkflowMethod` methods. To illustrate this, let's say that we define the following **invalid** code:

```java
// INVALID CODE!
public interface BaseWorkflow {
  @WorkflowMethod
  void retryNow();
}

@WorkflowInterface
public interface Workflow1 extends BaseWorkflow {}

@WorkflowInterface
public interface Workflow2 extends BaseWorkflow {}
```

Any attempt to register both implementations with the Worker will fail. Let's say that we have:

```java
worker.registerWorkflowImplementationTypes(Workflow1Impl.class, Workflow2Impl.class);
```

This registration will fail with:

```text
java.lang.IllegalStateException: BaseWorkflow workflow type is already registered with the worker
```

### Define Workflow parameters {#workflow-parameters}

**How to define Workflow parameters using the Java SDK.**

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow. All Workflow Definition parameters must be serializable.

A method annotated with `@WorkflowMethod` can have any number of parameters. We recommend passing a single parameter that contains all the input fields to allow for adding fields in a backward-compatible manner. Note that all inputs should be serializable by the default Jackson JSON Payload Converter.

You can create a custom object and pass it to the Workflow method, as shown in the following example.

```java
//...
@WorkflowInterface
public interface YourWorkflow {

  @WorkflowMethod
  String yourWFMethod(CustomObj customobj);
  // ...
}
```
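The custom object itself only needs to be serializable by the configured Payload Converter. A minimal sketch of what such a `CustomObj` could look like with the default Jackson JSON Payload Converter; the fields are illustrative:

```java
// A Jackson-friendly parameter object: a no-arg constructor plus
// getters/setters let the default Payload Converter (de)serialize it.
public class CustomObj {
  private String userId;
  private int amount;

  public CustomObj() {}

  public CustomObj(String userId, int amount) {
    this.userId = userId;
    this.amount = amount;
  }

  public String getUserId() { return userId; }
  public void setUserId(String userId) { this.userId = userId; }

  public int getAmount() { return amount; }
  public void setAmount(int amount) { this.amount = amount; }
}
```

Adding a new field to this object later does not break the Workflow method signature, which is the point of the single-parameter recommendation above.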
### Define Workflow return parameters {#workflow-return-values}

**How to define Workflow return parameters using the Java SDK.**

Workflow return values must also be serializable. Returning results, returning errors, or throwing exceptions is fairly idiomatic in each language that is supported. However, Temporal APIs that must be used to get the result of a Workflow Execution will only ever receive one of either the result or the error.

Workflow method arguments and return values must be serializable and deserializable using the provided [`DataConverter`](https://www.javadoc.io/static/io.temporal/temporal-sdk/1.17.0/io/temporal/common/converter/DataConverter.html).

The `execute` method for `DynamicWorkflow` can return type Object. Ensure that your Client can handle an Object type return or is able to convert the Object type response.

Related references:

- [Data Converter](/dataconversion)
- Java DataConverter reference: [https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/DataConverter.html](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/DataConverter.html)

### Customize your Workflow Type {#workflow-type}

**How to customize your Workflow Type using the Java SDK.**

Workflows have a Type, which is also referred to as the Workflow name. The following examples demonstrate how to set a custom name for your Workflow Type.

The Workflow Type defaults to the short name of the Workflow interface. In the following example, the Workflow Type defaults to `NotifyUserAccounts`.

```java
@WorkflowInterface
public interface NotifyUserAccounts {
  @WorkflowMethod
  void notify(String[] accountIds);
}
```

To override this default naming and assign a custom Workflow Type, use the `@WorkflowMethod` annotation with the `name` parameter. In the following example, the Workflow Type is set to `your-workflow`.

```java
@WorkflowInterface
public interface NotifyUserAccounts {
  @WorkflowMethod(name = "your-workflow")
  void notify(String[] accountIds);
}
```

When you set the Workflow Type this way, the value of the `name` parameter does not have to start with an uppercase letter.

### Workflow logic requirements {#workflow-logic-requirements}

Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.

When defining Workflows using the Temporal Java SDK, the Workflow code must be written to execute effectively once and to completion. The following constraints apply when writing Workflow Definitions (see the sketch after this list):

- Do not use mutable global variables in your Workflow implementations. This will ensure that multiple Workflow instances are fully isolated.
- Your Workflow code must be deterministic. Do not call non-deterministic functions (such as non-seeded random or `UUID.randomUUID()`) directly from the Workflow code. The Temporal SDK provides specific APIs for calling non-deterministic code in your Workflows.
- Do not use programming language constructs that rely on system time. For example, only use `Workflow.currentTimeMillis()` to get the current time inside a Workflow.
- Do not use native Java `Thread` or any other multi-threaded classes like `ThreadPoolExecutor`. Use `Async.function` or `Async.procedure`, provided by the Temporal SDK, to execute code asynchronously.
- Do not use synchronization, locks, or other standard Java blocking concurrency-related classes besides those provided by the Workflow class. There is no need for explicit synchronization because multi-threaded code inside a Workflow is executed one thread at a time and under a global lock.
- Call `Workflow.sleep` instead of `Thread.sleep`.
- Use `Promise` and `CompletablePromise` instead of `Future` and `CompletableFuture`.
- Use `WorkflowQueue` instead of `BlockingQueue`.
- Use `Workflow.getVersion` when making any changes to the Workflow code. Without this, any deployment of updated Workflow code might break already running Workflows.
- Do not access configuration APIs directly from a Workflow because changes in the configuration might affect a Workflow Execution path. Pass it as an argument to a Workflow function or use an Activity to load it.
- Use `DynamicWorkflow` when you need a default Workflow that can handle all Workflow Types that are not registered with a Worker. A single implementation can serve Workflow Types that are, by definition, loaded dynamically from an external source. All standard `WorkflowOptions` and determinism rules apply to Dynamic Workflow implementations.
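The following sketch pulls a few of these rules together in one hypothetical Workflow method; the `OrderWorkflow` interface and business logic are illustrative, but each API call shown is the deterministic replacement named in the list above:

```java
public class OrderWorkflowImpl implements OrderWorkflow {

  @Override
  public String processOrder(CustomObj input) {
    // Deterministic replacements for System.currentTimeMillis() and UUID.randomUUID().
    long startedAtMillis = Workflow.currentTimeMillis();
    String idempotencyKey = Workflow.randomUUID().toString();

    // Deterministic replacement for Thread.sleep().
    Workflow.sleep(Duration.ofMinutes(5));

    // Gate changed logic behind Workflow.getVersion so that Workflow
    // Executions started before the change keep replaying the old path.
    int version = Workflow.getVersion("add-audit-step", Workflow.DEFAULT_VERSION, 1);
    if (version == 1) {
      // New code path, taken only by Executions started after this change.
    }
    return idempotencyKey;
  }
}
```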
Java Workflow reference: [https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/package-summary.html](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/package-summary.html)

## Develop a basic Activity {#develop-activities}

**How to develop a basic Activity using the Java SDK.**

One of the primary things that Workflows do is orchestrate the execution of Activities. An Activity is a normal function or method execution that's intended to execute a single, well-defined action (either short or long-running), such as querying a database, calling a third-party API, or transcoding a media file. An Activity can interact with the world outside the Temporal Platform or use a Temporal Client to interact with a Temporal Service. For the Workflow to be able to execute the Activity, we must define the [Activity Definition](/activity-definition).

An [Activity Definition](/activities) consists of an Activity interface annotated with `@ActivityInterface` and an Activity implementation that implements it (see also the Temporal Java SDK [Activity](https://www.javadoc.io/static/io.temporal/temporal-sdk/0.19.0/io/temporal/activity/Activity.html) class). To handle Activity types that do not have an explicitly registered handler, you can directly implement a dynamic Activity.

```java
@ActivityInterface
public interface GreetingActivities {
  String composeGreeting(String greeting, String language);
}
```

Each method defined in the Activity interface defines a separate Activity method. You can annotate each method in the Activity interface with the `@ActivityMethod` annotation, but this is completely optional. The following example uses the `@ActivityMethod` annotation for the method defined in the previous example.

```java
@ActivityInterface
public interface GreetingActivities {
  @ActivityMethod
  String composeGreeting(String greeting, String language);
}
```

An Activity implementation is a Java class that implements an Activity annotated interface.

```java
// Implementation of the GreetingActivities interface from the previous section
static class GreetingActivitiesImpl implements GreetingActivities {
  @Override
  public String composeGreeting(String greeting, String name) {
    return greeting + " " + name + "!";
  }
}
```

### Define Activity parameters {#activity-parameters}

**How to define Activity parameters using the Java SDK.**

There is no explicit limit to the total number of parameters that an [Activity Definition](/activity-definition) may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload.

A single argument is limited to a maximum size of 2 MB.
And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Also, keep in mind that all Payload data is recorded in the [Workflow Execution Event History](/workflow-execution/event#event-history) and large Event Histories can affect Worker performance. This is because the entire Event History could be transferred to a Worker Process with a [Workflow Task](/tasks#workflow-task).

Some SDKs require that you pass context objects, others do not. When it comes to your application data (that is, data that is serialized and encoded into a Payload), we recommend that you use a single object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a function or method signature.

An Activity interface can have any number of parameters. All inputs should be serializable by the default Jackson JSON Payload Converter.

When implementing Activities, be mindful of the amount of data that you transfer using the Activity invocation parameters or return values, as these are recorded in the Workflow Execution Event History. Large Event Histories can adversely impact performance.

You can create a custom object, and pass it to the Activity interface, as shown in the following example.

```java
@ActivityInterface
public interface YourActivities {
  String getCustomObject(CustomObj customobj);

  void sendCustomObject(CustomObj customobj, String abc);
}
```

The `execute` method in the dynamic Activity interface implementation takes in `EncodedValues` that are inputs to the Activity Execution, as shown in the following example.

```java
// Dynamic Activity implementation
public static class DynamicActivityImpl implements DynamicActivity {
  @Override
  public Object execute(EncodedValues args) {
    String activityType = Activity.getExecutionContext().getInfo().getActivityType();
    return activityType
        + ": "
        + args.get(0, String.class)
        + " "
        + args.get(1, String.class)
        + " from: "
        + args.get(2, String.class);
  }
}
```

For more details, see [Dynamic Activity Reference](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/DynamicActivity.html).

### Define Activity return values {#activity-return-values}

**How to define Activity return values using the Java SDK.**

All data returned from an Activity must be serializable.

Activity return values are subject to payload size limits in Temporal. The default payload size limit is 2 MB, and there is a hard limit of 4 MB for any gRPC message size in the Event History transaction ([see Cloud limits here](https://docs.temporal.io/cloud/limits#per-message-grpc-limit)). Keep in mind that all return values are recorded in a [Workflow Execution Event History](/workflow-execution/event#event-history).

Activity return values must be serializable and deserializable by the provided [`DataConverter`](https://www.javadoc.io/static/io.temporal/temporal-sdk/1.17.0/io/temporal/common/converter/DataConverter.html).

The `execute` method for `DynamicActivity` can return type Object. Ensure that your Workflow or Client can handle an Object type return or is able to convert the Object type response.
- [Data Converter](/dataconversion)
- Java DataConverter reference: [https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/DataConverter.html](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/DataConverter.html)

### Customize your Activity Type {#activity-type}

**How to customize your Activity Type using the Java SDK.**

Each Activity has a Type, which may also be referred to as the Activity 'name'. This name appears in the Workflow Execution Event History in the Summary tab for each Activity Task. The name lets you identify Activity Types called during the Execution.

Custom Activity Type names prevent name collisions across interfaces and Workflows. They let you use descriptive Activity method names without concerns about re-using those names elsewhere in your project. They also support code management, especially in larger projects with many Activities. For example, you might use a prefix to group related Activities together. Custom names also keep metric keys distinct, so you can gather metrics without name conflicts.

The following examples show how to set custom names for your Activity Type.

**Default behavior**

By default, an Activity Type is the method name with the first letter capitalized:

```java
@ActivityInterface
public interface GreetingActivities {
  String sendMessage(String input);

  @ActivityMethod
  String composeGreeting(String greeting, String language);
}
```

- Method Name: `sendMessage`
- Activity Type: `SendMessage`
- Method Name: `composeGreeting`
- Activity Type: `ComposeGreeting`

**Custom Prefix**

Using the `namePrefix` parameter in the `@ActivityInterface` annotation adds a prefix to each Activity Type name mentioned in the interface, unless the prefix is specifically overridden:

```java
@ActivityInterface(namePrefix = "Messaging_")
public interface GreetingActivities {
  String sendMessage(String input);

  @ActivityMethod
  String composeGreeting(String greeting, String language);
}
```

- Method Name: `sendMessage`
- Activity Type: `Messaging_SendMessage`
- Method Name: `composeGreeting`
- Activity Type: `Messaging_ComposeGreeting`

The Activity Type is capitalized, even when using a prefix.

**Custom Name**

To override the default name and any inherited prefixes, use the `name` parameter in the `@ActivityMethod` annotation:

```java
@ActivityInterface(namePrefix = "Messaging_")
public interface GreetingActivities {
  String sendMessage(String input);

  @ActivityMethod
  String composeGreeting(String greeting, String language);

  @ActivityMethod(name = "farewell")
  String composeFarewell(String farewell, String language);
}
```

Using the `name` parameter won't automatically capitalize the result:

- Method Name: `sendMessage`
- Activity Type: `Messaging_SendMessage`
- Method Name: `composeGreeting`
- Activity Type: `Messaging_ComposeGreeting`
- Method Name: `composeFarewell`
- Activity Type: `farewell`

Be cautious with names that contain special characters, as these can be used as metric tags. Systems such as Prometheus may ignore metrics with tags using unsupported characters.

## Start an Activity Execution {#activity-execution}

**How to start an Activity Execution using the Java SDK.**

Calls to spawn [Activity Executions](/activity-execution) are written within a [Workflow Definition](/workflow-definition). The call to spawn an Activity Execution generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command.
This results in the set of three [Activity Task](/tasks#activity-task) related Events ([ActivityTaskScheduled](/references/events#activitytaskscheduled), [ActivityTaskStarted](/references/events#activitytaskstarted), and an Activity Task closing Event such as ActivityTaskCompleted) in your Workflow Execution Event History.

A single instance of the Activities implementation is shared across multiple simultaneous Activity invocations. Activity implementation code should be _idempotent_.

The values passed to Activities through invocation parameters or returned through a result value are recorded in the Execution history. The entire Execution history is transferred from the Temporal service to Workflow Workers when Workflow state needs to be recovered. A large Execution history can thus adversely impact the performance of your Workflow. Therefore, be mindful of the amount of data you transfer through Activity invocation parameters or return values. Otherwise, no additional limitations exist on Activity implementations.

Activities are remote procedure calls that must be invoked from within a Workflow using `ActivityStub`. Activities are not executable on their own. You cannot start an Activity Execution by itself.

Note that before an Activity Execution is invoked:

- Activity options must be set (either [`setStartToCloseTimeout`](/encyclopedia/detecting-activity-failures#start-to-close-timeout) or [`setScheduleToCloseTimeout`](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) is required). For details, see [How to set Activity timeouts](/develop/java/failure-detection#activity-timeouts).
- The Activity must be registered with a Worker. See [Worker Program](#run-a-dev-worker).
- Activity code must be thread-safe.

Activities should only be instantiated using stubs from within a Workflow. An `ActivityStub` returns a client-side stub that implements an Activity interface. You can invoke Activities using `Workflow.newActivityStub` (type-safe) or `Workflow.newUntypedActivityStub` (untyped).

Calling a method on the Activity interface schedules the Activity invocation with the Temporal service, and generates an [`ActivityTaskScheduled` Event](/references/events#activitytaskscheduled).

Activities can be invoked synchronously or asynchronously.

**Invoking Activities Synchronously**

In the following example, we use the type-safe `Workflow.newActivityStub` within the "FileProcessingWorkflow" Workflow implementation to create a client-side stub of the `FileProcessingActivities` class. We also define `ActivityOptions` and set the `setStartToCloseTimeout` option to one hour.

```java
public class FileProcessingWorkflowImpl implements FileProcessingWorkflow {

  private final FileProcessingActivities activities;

  public FileProcessingWorkflowImpl() {
    this.activities =
        Workflow.newActivityStub(
            FileProcessingActivities.class,
            ActivityOptions.newBuilder().setStartToCloseTimeout(Duration.ofHours(1)).build());
  }

  @Override
  public void processFile(Arguments args) {
    String localName = null;
    String processedName = null;
    try {
      localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());
      processedName = activities.processFile(localName);
      activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);
    } finally {
      if (localName != null) {
        activities.deleteLocalFile(localName);
      }
      if (processedName != null) {
        activities.deleteLocalFile(processedName);
      }
    }
  }
  // ...
}
```

A Workflow can have multiple Activity stubs. Each Activity stub can have its own `ActivityOptions` defined.
The following example shows a Workflow implementation with two typed Activity stubs.

```java
public FileProcessingWorkflowImpl() {
  ActivityOptions options1 =
      ActivityOptions.newBuilder()
          .setTaskQueue("taskQueue1")
          .setStartToCloseTimeout(Duration.ofMinutes(10))
          .build();
  this.store1 = Workflow.newActivityStub(FileProcessingActivities.class, options1);

  ActivityOptions options2 =
      ActivityOptions.newBuilder()
          .setTaskQueue("taskQueue2")
          .setStartToCloseTimeout(Duration.ofMinutes(5))
          .build();
  this.store2 = Workflow.newActivityStub(FileProcessingActivities.class, options2);
}
```

To invoke Activities inside Workflows without referencing the interface it implements, use an untyped Activity stub `Workflow.newUntypedActivityStub`. This is useful when the Activity type is not known at compile time, or to invoke Activities implemented in different programming languages.

```java
// Workflow code
ActivityOptions activityOptions =
    ActivityOptions.newBuilder()
        .setStartToCloseTimeout(Duration.ofSeconds(3))
        .setTaskQueue("simple-queue-node")
        .build();

ActivityStub activity = Workflow.newUntypedActivityStub(activityOptions);
activity.execute("ComposeGreeting", String.class, "Hello World", "Spanish");
```

**Invoking Activities Asynchronously**

Sometimes Workflows need to perform certain operations in parallel. The Temporal Java SDK provides the `Async` class, which includes static methods used to invoke any Activity asynchronously. The calls return a result of type `Promise`, which is similar to the Java `Future` and `CompletionStage`. When invoking Activities, use `Async.function` for Activities that return a result, and `Async.procedure` for Activities that return void.

In the following asynchronous Activity invocation, the method reference is passed to `Async.function` followed by Activity arguments.

```java
Promise<String> localNamePromise = Async.function(activities::download, sourceBucket, sourceFile);
```

The following example shows how to call two Activity methods, "download" and "upload", in parallel on multiple files.

```java
public void processFile(Arguments args) {
  List<Promise<String>> localNamePromises = new ArrayList<>();
  List<String> processedNames = null;
  try {
    // Download all files in parallel.
    for (String sourceFilename : args.getSourceFilenames()) {
      Promise<String> localName =
          Async.function(activities::download, args.getSourceBucketName(), sourceFilename);
      localNamePromises.add(localName);
    }
    List<String> localNames = new ArrayList<>();
    for (Promise<String> localName : localNamePromises) {
      localNames.add(localName.get());
    }
    processedNames = activities.processFiles(localNames);

    // Upload all results in parallel.
    List<Promise<Void>> uploadedList = new ArrayList<>();
    for (String processedName : processedNames) {
      Promise<Void> uploaded =
          Async.procedure(
              activities::upload,
              args.getTargetBucketName(),
              args.getTargetFilename(),
              processedName);
      uploadedList.add(uploaded);
    }
    // Wait for all uploads to complete.
    Promise.allOf(uploadedList).get();
  } finally {
    for (Promise<String> localNamePromise : localNamePromises) {
      // Skip files that haven't completed downloading.
      if (localNamePromise.isCompleted()) {
        activities.deleteLocalFile(localNamePromise.get());
      }
    }
    if (processedNames != null) {
      for (String processedName : processedNames) {
        activities.deleteLocalFile(processedName);
      }
    }
  }
}
```

**Activity Execution Context**

`ActivityExecutionContext` is a context object passed to each Activity implementation by default. You can access it in your Activity implementations via `Activity.getExecutionContext()`.
It provides getters to access information about the Workflow that invoked the Activity. Note that the Activity context information is stored in a thread-local variable. Therefore, calls to `getExecutionContext()` succeed only within the thread that invoked the Activity function.

Following is an example of using the `ActivityExecutionContext`:

```java
public class FileProcessingActivitiesImpl implements FileProcessingActivities {

  @Override
  public String download(String bucketName, String remoteName, String localName) {
    ActivityExecutionContext ctx = Activity.getExecutionContext();
    ActivityInfo info = ctx.getInfo();
    log.info("namespace=" + info.getActivityNamespace());
    log.info("workflowId=" + info.getWorkflowId());
    log.info("runId=" + info.getRunId());
    log.info("activityId=" + info.getActivityId());
    log.info("activityTimeout=" + info.getStartToCloseTimeout());
    return downloadFileFromS3(bucketName, remoteName, localDirectory + localName);
  }
  // ...
}
```

For details on getting the results of an Activity Execution, see [Activity Execution Result](#activity-execution-result).

### Set required Activity Timeouts {#required-timeout}

**How to set required Activity Timeouts using the Java SDK.**

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set in the Activity Options.

Set your Activity Timeout from the [`ActivityOptions.Builder`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html) class. Available timeouts are:

- ScheduleToCloseTimeout()
- ScheduleToStartTimeout()
- StartToCloseTimeout()

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker. The following uses `ActivityStub`.

```java
GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            .setScheduleToCloseTimeout(Duration.ofSeconds(5))
            // .setStartToCloseTimeout(Duration.ofSeconds(2))
            // .setScheduleToCloseTimeout(Duration.ofSeconds(20))
            .build());
```

The following uses `WorkflowImplementationOptions`.

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "GetCustomerGreeting",
                // Set Activity Execution timeout
                ActivityOptions.newBuilder()
                    .setScheduleToCloseTimeout(Duration.ofSeconds(5))
                    // .setStartToCloseTimeout(Duration.ofSeconds(2))
                    // .setScheduleToStartTimeout(Duration.ofSeconds(5))
                    .build()))
        .build();
```

:::note
If you define per-Activity-Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again specifically with `ActivityStub` in a Workflow will override the per-Activity-Type setting.
:::

### Java ActivityOptions reference {#activity-options-reference}

Use [`ActivityOptions`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html) to configure how to invoke an Activity Execution.

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker.
Note that if you define per-Activity-Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again specifically with `ActivityStub` in a Workflow will override the per-Activity-Type setting.

The following table lists all `ActivityOptions` that can be configured for an Activity invocation.

| Option                                                 | Required                                           | Type                     |
| ------------------------------------------------------ | -------------------------------------------------- | ------------------------ |
| [`setScheduleToCloseTimeout`](#scheduletoclosetimeout) | Yes (if `StartToCloseTimeout` is not specified)    | Duration                 |
| [`setScheduleToStartTimeout`](#scheduletostarttimeout) | No                                                 | Duration                 |
| [`setStartToCloseTimeout`](#starttoclosetimeout)       | Yes (if `ScheduleToCloseTimeout` is not specified) | Duration                 |
| [`setHeartbeatTimeout`](#heartbeattimeout)             | No                                                 | Duration                 |
| [`setTaskQueue`](#taskqueue)                           | No                                                 | String                   |
| [`setRetryOptions`](#retryoptions)                     | No                                                 | RetryOptions             |
| [`setCancellationType`](#setcancellationtype)          | No                                                 | ActivityCancellationType |

#### ScheduleToCloseTimeout

To set a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout), use [`ActivityOptions.newBuilder.setScheduleToCloseTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html). This or `StartToCloseTimeout` must be set.

- Type: `Duration`
- Default: Unlimited. Note that if `WorkflowRunTimeout` and/or `WorkflowExecutionTimeout` are defined in the Workflow, all Activity retries will stop when either or both of these timeouts are reached.

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker. Note that if you define per-Activity-Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again specifically with `ActivityStub` in a Workflow will override the per-Activity-Type setting.

- With `ActivityStub`

```java
GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            .setScheduleToCloseTimeout(Duration.ofSeconds(5))
            .build());
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "GetCustomerGreeting",
                ActivityOptions.newBuilder()
                    .setScheduleToCloseTimeout(Duration.ofSeconds(5))
                    .build()))
        .build();
```

#### ScheduleToStartTimeout

To set a [Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout), use [`ActivityOptions.newBuilder.setScheduleToStartTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html).

- Type: `Duration`
- Default: Unlimited. This timeout is non-retryable.

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker. Note that if you define per-Activity-Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again specifically with `ActivityStub` in a Workflow will override the per-Activity-Type setting.

- With `ActivityStub`

```java
GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            .setScheduleToStartTimeout(Duration.ofSeconds(5))
            // note that either StartToCloseTimeout or ScheduleToCloseTimeout are
            // required when setting Activity options.
            .setScheduleToCloseTimeout(Duration.ofSeconds(20))
            .build());
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "GetCustomerGreeting",
                ActivityOptions.newBuilder()
                    .setScheduleToStartTimeout(Duration.ofSeconds(5))
                    .build()))
        .build();
```

#### StartToCloseTimeout

To set a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout), use [`ActivityOptions.newBuilder.setStartToCloseTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html). This or `ScheduleToCloseTimeout` must be set.

- Type: `Duration`
- Default: Defaults to [`ScheduleToCloseTimeout`](#scheduletoclosetimeout) value

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker. Note that if you define per-Activity-Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again specifically with `ActivityStub` in a Workflow will override the per-Activity-Type setting.

- With `ActivityStub`

```java
GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            .setStartToCloseTimeout(Duration.ofSeconds(2))
            .build());
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "EmailCustomerGreeting",
                ActivityOptions.newBuilder()
                    // Set Activity Execution timeout (single run)
                    .setStartToCloseTimeout(Duration.ofSeconds(2))
                    .build()))
        .build();
```

#### HeartbeatTimeout

To set a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout), use [`ActivityOptions.newBuilder.setHeartbeatTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html).

- Type: `Duration`
- Default: None

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker. Note that if you define per-Activity-Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again specifically with `ActivityStub` in a Workflow will override the per-Activity-Type setting.

- With `ActivityStub`

```java
private final GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            // note that either StartToCloseTimeout or ScheduleToCloseTimeout are
            // required when setting Activity options.
            .setStartToCloseTimeout(Duration.ofSeconds(5))
            .setHeartbeatTimeout(Duration.ofSeconds(2))
            .build());
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "EmailCustomerGreeting",
                ActivityOptions.newBuilder()
                    // note that either StartToCloseTimeout or ScheduleToCloseTimeout are
                    // required when setting Activity options.
                    .setStartToCloseTimeout(Duration.ofSeconds(5))
                    .setHeartbeatTimeout(Duration.ofSeconds(2))
                    .build()))
        .build();
```

#### TaskQueue

- Type: `String`
- Default: Defaults to the Task Queue that the Workflow was started with.
- With `ActivityStub`

```java
GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            // note that either StartToCloseTimeout or ScheduleToCloseTimeout are required when
            // setting Activity options.
            .setStartToCloseTimeout(Duration.ofSeconds(5))
            .setTaskQueue("yourTaskQueue")
            .build());
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "EmailCustomerGreeting",
                ActivityOptions.newBuilder()
                    // note that either StartToCloseTimeout or ScheduleToCloseTimeout are
                    // required when setting Activity options.
                    .setStartToCloseTimeout(Duration.ofSeconds(5))
                    .setTaskQueue("yourTaskQueue")
                    .build()))
        .build();
```

See [Task Queue](/task-queue).

#### RetryOptions

To set a Retry Policy, known as the [Retry Options](/encyclopedia/retry-policies) in Java, use [`ActivityOptions.newBuilder.setRetryOptions()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html).

- Type: `RetryOptions`
- Default: Server-defined Activity Retry policy.

- With `ActivityStub`

```java
private final ActivityOptions options =
    ActivityOptions.newBuilder()
        // note that either StartToCloseTimeout or ScheduleToCloseTimeout are
        // required when setting Activity options.
        .setStartToCloseTimeout(Duration.ofSeconds(5))
        .setRetryOptions(
            RetryOptions.newBuilder()
                .setInitialInterval(Duration.ofSeconds(1))
                .setMaximumInterval(Duration.ofSeconds(10))
                .build())
        .build();
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "EmailCustomerGreeting",
                ActivityOptions.newBuilder()
                    // note that either StartToCloseTimeout or ScheduleToCloseTimeout are
                    // required when setting Activity options.
                    .setStartToCloseTimeout(Duration.ofSeconds(5))
                    .setRetryOptions(
                        RetryOptions.newBuilder()
                            .setDoNotRetry(NullPointerException.class.getName())
                            .build())
                    .build()))
        .build();
```

#### setCancellationType

- Type: `ActivityCancellationType`
- Default: `ActivityCancellationType.TRY_CANCEL`

- With `ActivityStub`

```java
private final GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            .setCancellationType(ActivityCancellationType.WAIT_CANCELLATION_COMPLETED)
            .build());
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "EmailCustomerGreeting",
                ActivityOptions.newBuilder()
                    .setCancellationType(ActivityCancellationType.WAIT_CANCELLATION_COMPLETED)
                    .build()))
        .build();
```

### Get the result of an Activity Execution {#activity-execution-result}

**How to get the result of an Activity Execution using the Java SDK.**

The call to spawn an [Activity Execution](/activity-execution) generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command and provides the Workflow with an Awaitable. Workflow Executions can either block progress until the result is available through the Awaitable or continue progressing, making use of the result when it becomes available.

To get the results of an asynchronously invoked Activity method, use the `Promise` `get` method to block until the Activity method result is available.
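As a minimal sketch, reusing the `activities` stub and `composeGreeting` Activity method from the earlier examples:

```java
// Schedule the Activity and immediately get back a Promise.
Promise<String> greeting = Async.function(activities::composeGreeting, "Hello", "Spanish");

// The Workflow can do other work here, then block until the result is available.
String result = greeting.get();
```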
Sometimes an Activity Execution lifecycle goes beyond a synchronous method invocation. For example, a request can be put in a queue and later a reply arrives and is picked up by a different Worker process. The whole request-reply interaction can be modeled as a single Activity.

To indicate that an Activity should not be completed upon its method return, call `ActivityExecutionContext.doNotCompleteOnReturn()` from the original Activity thread. Then later, when replies come, complete the Activity using the `ActivityCompletionClient`. To correlate Activity invocation with completion, use either a `TaskToken` or Workflow and Activity Ids.

Following is an example of using `ActivityExecutionContext.doNotCompleteOnReturn()`:

```java
public class FileProcessingActivitiesImpl implements FileProcessingActivities {

  public String download(String bucketName, String remoteName, String localName) {
    ActivityExecutionContext ctx = Activity.getExecutionContext();
    // Used to correlate reply
    byte[] taskToken = ctx.getInfo().getTaskToken();
    asyncDownloadFileFromS3(taskToken, bucketName, remoteName, localDirectory + localName);
    ctx.doNotCompleteOnReturn();
    // Return value is ignored when doNotCompleteOnReturn was called.
    return "ignored";
  }
  // ...
}
```

When the download is complete, the download service can complete the Activity, or fail it, from a different process. For example:

```java
public <R> void completeActivity(byte[] taskToken, R result) {
  completionClient.complete(taskToken, result);
}

public void failActivity(byte[] taskToken, Exception failure) {
  completionClient.completeExceptionally(taskToken, failure);
}
```
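For context, the `completionClient` used above is created from a `WorkflowClient`; a minimal sketch, assuming an existing `client` instance:

```java
// Create a completion client from the WorkflowClient.
ActivityCompletionClient completionClient = client.newActivityCompletionClient();

// Later, when the external work finishes, complete the Activity by its task token.
// The result value shown here is illustrative.
completionClient.complete(taskToken, "s3://bucket/downloaded-file");
```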
## Develop a Worker Program in Java {#run-a-dev-worker}

**How to develop a Worker Program using the Java SDK.**

The [Worker Process](/workers#worker-process) is where Workflow Functions and Activity Functions are executed.

- Each [Worker Entity](/workers#worker-entity) in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
- Each Worker Entity must also associate itself with exactly one [Task Queue](/task-queue).
- Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A [Worker Entity](/workers#worker-entity) is the component within a Worker Process that listens to a specific Task Queue.

Although multiple Worker Entities can be in a single Worker Process, a Worker Process with a single Worker Entity may be perfectly sufficient. For more information, see the [Worker tuning guide](/develop/worker-performance).

A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

Use the `newWorker` method on an instance of a [`WorkerFactory`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/worker/WorkerFactory.html) to create a new Worker in Java. A single Worker Entity can contain many Worker Objects. Call the `start()` method on the instance of the `WorkerFactory` to start all the Workers created in this process.

```java
// ...
public class YourWorker {

  public static void main(String[] args) {
    WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
    WorkflowClient client = WorkflowClient.newInstance(service);
    WorkerFactory factory = WorkerFactory.newInstance(client);
    Worker yourWorker = factory.newWorker("your_task_queue");
    // Register Workflow
    // and/or register Activities
    factory.start();
  }
}
```

After creating the Worker entity, register all Workflow Types and all Activity Types that the Worker can execute. A Worker can be registered with just Workflows, just Activities, or both.

**Operation guides:**

- [How to tune Workers](/develop/worker-performance)

### How to register types {#register-types}

**How to register Workflow and Activity Types with a Worker using the Java SDK.**

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

Use `worker.registerWorkflowImplementationTypes` to register Workflow Types and `worker.registerActivitiesImplementations` to register Activity implementations with Workers.

For Workflows, the Workflow Type is registered with a Worker. A Workflow Type can be registered only once per Worker entity. If you define multiple Workflow implementations of the same type, you get an exception at the time of registration.

For Activities, Activity implementation instances are registered with a Worker because they are stateless and thread-safe. You can pass any number of dependencies in the Activity implementation constructor, such as database connections, services, and so on (see the sketch at the end of this section).

The following example shows how to register a Workflow and an Activity with a Worker.

```java
Worker worker = workerFactory.newWorker("your_task_queue");
// ...
// Register Workflow
worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);
// Register Activity
worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
```

When you register a single instance of an Activity, you can have multiple instances of Workflow Executions calling the same Activity. Activity code must be thread-safe because the same instance of the Activity code is run for every Workflow Execution that calls it.

For `DynamicWorkflow`, only one Workflow implementation that extends `DynamicWorkflow` can be registered with a Worker. The following example shows how to register the `DynamicWorkflow` and `DynamicActivity` implementation with a Worker. Note that all registrations must happen before the `WorkerFactory` is started.

```java
public static void main(String[] arg) {

  WorkflowServiceStubs service = WorkflowServiceStubs.newInstance();
  WorkflowClient client = WorkflowClient.newInstance(service);
  WorkerFactory factory = WorkerFactory.newInstance(client);
  Worker worker = factory.newWorker(TASK_QUEUE);

  /* Register the Dynamic Workflow implementation with the Worker. Workflow implementations
   * must be known to the Worker at runtime to dispatch Workflow Tasks. */
  worker.registerWorkflowImplementationTypes(DynamicGreetingWorkflowImpl.class);

  /* Register the Dynamic Activity implementation with the Worker. Since Activities are
   * stateless and thread-safe, we register a shared instance. */
  worker.registerActivitiesImplementations(new DynamicGreetingActivityImpl());

  // Start all the Workers that are in this process.
  factory.start();

  /* Create the Workflow stub. Note that the Workflow Type is not explicitly registered
   * with the Worker. */
  WorkflowOptions workflowOptions =
      WorkflowOptions.newBuilder().setTaskQueue(TASK_QUEUE).setWorkflowId(WORKFLOW_ID).build();
  WorkflowStub workflow = client.newUntypedWorkflowStub("DynamicWF", workflowOptions);

  /* Start the Workflow Execution and immediately send a Signal. Pass in the Workflow args
   * and Signal args. */
  workflow.signalWithStart("greetingSignal", new Object[] {"John"}, new Object[] {"Hello"});

  // Wait for the Workflow to finish and get the results.
  String result = workflow.getResult(String.class);
  System.out.println(result);
  System.exit(0);
}
```

You can register multiple type-specific Workflow implementations alongside a single `DynamicWorkflow` implementation. You can register only one Activity instance that implements `DynamicActivity` with a Worker.
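As noted earlier in this section, dependencies can be passed to an Activity implementation through its constructor; a minimal sketch, where the `DataSource` dependency is illustrative:

```java
public class GreetingActivitiesImpl implements GreetingActivities {

  private final javax.sql.DataSource dataSource;

  // The dependency is injected once; the single registered instance is shared
  // across all invocations, so it must be thread-safe.
  public GreetingActivitiesImpl(javax.sql.DataSource dataSource) {
    this.dataSource = dataSource;
  }

  @Override
  public String composeGreeting(String greeting, String language) {
    // Use dataSource here to look up data as needed (illustrative).
    return greeting + " (" + language + ")";
  }
}
```

Registration then supplies the dependency when constructing the instance, for example `worker.registerActivitiesImplementations(new GreetingActivitiesImpl(dataSource));`.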
---

## Debugging - Java SDK

In addition to writing unit and integration tests, debugging your Workflows is also a valuable testing tool. You can debug your Workflow code using a debugger provided by your favorite Java IDE.

Note that when debugging your Workflow code, the Temporal Java SDK includes deadlock detection, which fails a Workflow Task if code blocks for more than a second without relinquishing execution control. Because of this, you can often encounter a `PotentialDeadlockException` while stepping through Workflow code during debugging.

To alleviate this issue, you can set the `TEMPORAL_DEBUG` environment variable to true before debugging your Workflow code. Make sure to set `TEMPORAL_DEBUG` to true only during debugging.

## How to debug in a development environment {#debug-in-a-development-environment}

In addition to the normal development tools of logging and a debugger, you can also see what's happening in your Workflow by using the [Web UI](/web-ui) or [Temporal CLI](/cli).

## How to debug in a production environment {#debug-in-a-production-environment}

You can debug production Workflows using:

- [Web UI](/web-ui)
- [Temporal CLI](/cli)
- [Replay](/develop/java/testing-suite#replay)
- [Tracing](/develop/java/observability#tracing)
- [Logging](/develop/java/observability#logging)

You can debug and tune Worker performance with metrics and the [Worker performance guide](/develop/worker-performance). For more information, see [Observability ▶️ Metrics](/develop/java/observability#metrics) for setting up SDK metrics.

Debug Server performance with [Cloud metrics](/cloud/metrics/) or [self-hosted Server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics).

---

## Enriching the User Interface - Java SDK

Temporal supports adding context to Workflows and Events with metadata. This helps users identify and understand Workflows and their operations.

## Adding Summary and Details to Workflows

### Starting a Workflow

When starting a Workflow, you can provide a static summary and details to help identify the Workflow in the UI:

```java
public class Main {
  public static void main(String[] args) {
    // Create service stubs and workflow client
    WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
    WorkflowClient workflowClient = WorkflowClient.newInstance(service);

    // Create workflow options with static summary and details
    WorkflowOptions options =
        WorkflowOptions.newBuilder()
            .setWorkflowId("your-workflow-id")
            .setTaskQueue("your-task-queue")
            .setStaticSummary("Order processing for customer #12345")
            .setStaticDetails("Processing premium order with expedited shipping")
            .build();

    // Create the workflow stub
    YourWorkflow workflow = workflowClient.newWorkflowStub(YourWorkflow.class, options);

    // Start the workflow
    String result = workflow.yourWorkflowMethod("workflow input");
  }
}
```

`setStaticSummary()` sets a single-line description that appears in the Workflow list view, limited to 200 bytes. `setStaticDetails()` sets multi-line comprehensive information that appears in the Workflow details view, with a larger limit of 20K bytes. The input format is standard Markdown excluding images, HTML, and scripts.
You can also use `WorkflowClient.start()` for async execution:

```java
// Start workflow asynchronously
WorkflowExecution execution = WorkflowClient.start(workflow::yourWorkflowMethod, "workflow input");
```

### Inside the Workflow

Within a Workflow, you can get and set the _current workflow details_. Unlike the static summary/details set at Workflow start, this value can be updated throughout the life of the Workflow. Current Workflow details also take Markdown format (excluding images, HTML, and scripts) and can span multiple lines.

```java
public class YourWorkflowImpl implements YourWorkflow {

  @Override
  public String yourWorkflowMethod(String input) {
    // Get the current details
    String currentDetails = Workflow.getCurrentDetails();
    Workflow.getLogger(YourWorkflowImpl.class).info("Current details: " + currentDetails);

    // Set/update the current details
    Workflow.setCurrentDetails("Updated workflow details with new status");

    return "Workflow completed";
  }
}
```

### Adding Summary to Activities and Timers

You can attach a summary to Activities with `setSummary()` when starting them from within a Workflow:

```java
public class YourWorkflowImpl implements YourWorkflow {

  private final YourActivities activities =
      Workflow.newActivityStub(
          YourActivities.class,
          ActivityOptions.newBuilder()
              .setStartToCloseTimeout(Duration.ofSeconds(10))
              .setSummary("Processing user data")
              .build());

  @Override
  public String yourWorkflowMethod(String input) {
    // Execute the activity with the summary
    String result = activities.yourActivity(input);
    return result;
  }
}
```

Similarly, you can attach a summary to Timers with `setSummary()` within a Workflow:

```java
public class YourWorkflowImpl implements YourWorkflow {

  @Override
  public String yourWorkflowMethod(String input) {
    // Create a timer with a summary
    Workflow.newTimer(
            Duration.ofMinutes(5),
            TimerOptions.newBuilder().setSummary("Waiting for payment confirmation").build())
        .get(); // Wait for the timer to fire

    return "Timer completed";
  }
}
```

The input format for `setSummary()` is a string limited to 200 bytes.

## Viewing Summary and Details in the UI

Once you've added summaries and details to your Workflows, Activities, and Timers, you can view this enriched information in the Temporal Web UI. Navigate to your Workflow's details page to see the metadata displayed in two key locations:

### Workflow Overview Section

At the top of the workflow details page, you'll find the workflow-level metadata:

- **Summary & Details** - Displays the static summary and static details set when starting the workflow
- **Current Details** - Displays the dynamic details that can be updated during workflow execution

All Workflow details support standard Markdown formatting (excluding images, HTML, and scripts), allowing you to create rich, structured information displays.

### Event History

Individual events in the Workflow's Event History display their associated summaries when available. Workflow, Activity, and Timer summaries appear in purple text next to their corresponding events, providing immediate context without requiring you to expand the Event details. When you do expand an Event, the summary is also prominently displayed in the detailed view.
---

## Failure detection - Java SDK

This page shows how to do the following:

- [Set Workflow timeouts](#workflow-timeouts)
- [Set a Workflow Retry Policy](#workflow-retries)
- [Set Activity timeouts](#activity-timeouts)
- [Set a custom Activity Retry Policy](#activity-retries)
- [Heartbeat an Activity](#activity-heartbeats)
- [Set a Heartbeat Timeout](#heartbeat-timeout)

## Workflow timeouts {#workflow-timeouts}

**How to set Workflow timeouts using the Java SDK.**

Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution. Workflow timeouts are set when [starting the Workflow Execution](#workflow-timeouts).

Before we continue, we want to note that we generally do not recommend setting Workflow Timeouts, because Workflows are designed to be long-running and resilient. Setting a Timeout can limit a Workflow's ability to handle unexpected delays or long-running processes. If you need to perform an action inside your Workflow after a specific period of time, we recommend using a Timer.

- **[Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout):** restricts the maximum amount of time that a single Workflow Execution can be executed.
- **[Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout):** restricts the maximum amount of time that a single Workflow Run can last.
- **[Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout):** restricts the maximum amount of time that a Worker can execute a Workflow Task.

Create an instance of [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) in the Client code and set your timeout. Available timeouts are:

- [setWorkflowExecutionTimeout()](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html#setWorkflowExecutionTimeout(java.time.Duration))
- [setWorkflowRunTimeout()](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html#setWorkflowRunTimeout(java.time.Duration))
- [setWorkflowTaskTimeout()](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html#setWorkflowTaskTimeout(java.time.Duration))

```java
// Create a Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWorkflow")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Workflow Timeout duration
            .setWorkflowExecutionTimeout(Duration.ofSeconds(10))
            // .setWorkflowRunTimeout(Duration.ofSeconds(10))
            // .setWorkflowTaskTimeout(Duration.ofSeconds(10))
            .build());
```

### Workflow Retry Policy {#workflow-retries}

**How to set a Workflow Retry Policy in Java.**

A Retry Policy can work in cooperation with the timeouts to provide fine controls to optimize the execution experience.

Use a [Retry Policy](/encyclopedia/retry-policies) to retry a Workflow Execution in the event of a failure. Workflow Executions do not retry by default, and Retry Policies should be used with Workflow Executions only in certain situations.
To set Workflow Retry Options on the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance, use [`WorkflowOptions.Builder.setRetryOptions`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

- Type: `RetryOptions`
- Default: `null`, which means no retries will be attempted.

```java
// Create a Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("GreetWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Workflow Retry Options
            .setRetryOptions(RetryOptions.newBuilder().build())
            .build());
```

## Activity timeouts {#activity-timeouts}

**How to set Activity timeouts using the Java SDK.**

Each Activity timeout controls the maximum duration of a different aspect of an Activity Execution. The following timeouts are available in the Activity Options.

- **[Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout):** is the maximum amount of time allowed for the overall [Activity Execution](/activity-execution).
- **[Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout):** is the maximum time allowed for a single [Activity Task Execution](/tasks#activity-task-execution).
- **[Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout):** is the maximum amount of time that is allowed from when an [Activity Task](/tasks#activity-task) is scheduled to when a [Worker](/workers#worker) starts that Activity Task.

An Activity Execution must have either the Start-To-Close or the Schedule-To-Close Timeout set.

Set your Activity Timeout from the [`ActivityOptions.Builder`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html) class. Available timeouts are:

- ScheduleToCloseTimeout()
- ScheduleToStartTimeout()
- StartToCloseTimeout()

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker. The following uses `ActivityStub`.

```java
GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            .setScheduleToCloseTimeout(Duration.ofSeconds(5))
            // .setStartToCloseTimeout(Duration.ofSeconds(2))
            // .setScheduleToCloseTimeout(Duration.ofSeconds(20))
            .build());
```

The following uses `WorkflowImplementationOptions`.

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "GetCustomerGreeting",
                // Set Activity Execution timeout
                ActivityOptions.newBuilder()
                    .setScheduleToCloseTimeout(Duration.ofSeconds(5))
                    // .setStartToCloseTimeout(Duration.ofSeconds(2))
                    // .setScheduleToStartTimeout(Duration.ofSeconds(5))
                    .build()))
        .build();
```

:::note
If you define per-Activity-Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again specifically with `ActivityStub` in a Workflow will override the per-Activity-Type setting.
:::

### Custom Activity Retry Policy {#activity-retries}

**How to set a custom Activity Retry Policy in Java.**

A Retry Policy works in cooperation with the timeouts to provide fine controls to optimize the execution experience.
Activity Executions are automatically associated with a default [Retry Policy](/encyclopedia/retry-policies) if a custom one is not provided.

To set a Retry Policy, known as the [Retry Options](/encyclopedia/retry-policies) in Java, use [`ActivityOptions.newBuilder().setRetryOptions()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html).

- Type: `RetryOptions`
- Default: Server-defined Activity Retry Policy.

- With `ActivityStub`

```java
private final ActivityOptions options =
    ActivityOptions.newBuilder()
        // note that either StartToCloseTimeout or ScheduleToCloseTimeout is
        // required when setting Activity options.
        .setStartToCloseTimeout(Duration.ofSeconds(5))
        .setRetryOptions(
            RetryOptions.newBuilder()
                .setInitialInterval(Duration.ofSeconds(1))
                .setMaximumInterval(Duration.ofSeconds(10))
                .build())
        .build();
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "EmailCustomerGreeting",
                ActivityOptions.newBuilder()
                    // note that either StartToCloseTimeout or ScheduleToCloseTimeout is
                    // required when setting Activity options.
                    .setStartToCloseTimeout(Duration.ofSeconds(5))
                    .setRetryOptions(
                        RetryOptions.newBuilder()
                            .setDoNotRetry(NullPointerException.class.getName())
                            .build())
                    .build()))
        .build();
```

## Activity next Retry delay {#activity-next-retry-delay}

**How to override the next Retry delay following an Activity failure using the Temporal Java SDK**

You may throw an [`ApplicationFailure`](/references/failures#application-failure) with the `NextRetryDelay` field set. This value will replace and override whatever the retry interval would be on the Retry Policy. For example, if in an Activity you want to base the interval on the number of attempts, you might do:

```java
int attempt = Activity.getExecutionContext().getInfo().getAttempt();
throw ApplicationFailure.newFailureWithCauseAndDelay(
    "Something bad happened on attempt " + attempt,
    "my_failure_type",
    null,
    Duration.ofSeconds(3L * attempt));
```

## Heartbeat an Activity {#activity-heartbeats}

**How to Heartbeat an Activity using the Java SDK.**

An [Activity Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) is a ping from the [Worker Process](/workers#worker-process) that is executing the Activity to the [Temporal Service](/temporal-service). Each Heartbeat informs the Temporal Service that the [Activity Execution](/activity-execution) is making progress and the Worker has not crashed. If the Temporal Service does not receive a Heartbeat within a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) time period, the Activity is considered failed, and another [Activity Task Execution](/tasks#activity-task-execution) may be scheduled according to the Retry Policy.

Heartbeats may not always be sent to the Temporal Service; they may be [throttled](/encyclopedia/detecting-activity-failures#throttling) by the Worker.

Activity Cancellations are delivered to Activities from the Temporal Service when they Heartbeat. Activities that don't Heartbeat can't receive a Cancellation. Heartbeat throttling may lead to Cancellation getting delivered later than expected.

Heartbeats can contain a `details` field describing the Activity's current progress. If an Activity gets retried, the Activity can access the `details` from the last Heartbeat that was sent to the Temporal Service.
To Heartbeat an Activity Execution in Java, use the `Activity.getExecutionContext().heartbeat()` method.

```java
public class YourActivityDefinitionImpl implements YourActivityDefinition {
  @Override
  public String yourActivityMethod(YourActivityMethodParam param) {
    // ...
    Activity.getExecutionContext().heartbeat(details);
    // ...
  }
  // ...
}
```

The method takes an optional argument, represented by the `details` variable above, that describes the latest progress of the Activity Execution. The argument can be a variety of types, such as an exception object, custom object, or string.

If the Activity Execution times out, the last Heartbeat `details` are included in the thrown `ActivityTimeoutException`, which can be caught by the calling Workflow. The Workflow can then use the `details` information to pass to the next Activity invocation if needed.

In the case of Activity retries, the last Heartbeat's `details` are available and can be extracted from the last failed attempt by using `Activity.getExecutionContext().getHeartbeatDetails(Class detailsClass)`.

### Heartbeat Timeout {#heartbeat-timeout}

**How to set a Heartbeat Timeout using the Java SDK.**

A [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) works in conjunction with [Activity Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat).

To set a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout), use [`ActivityOptions.newBuilder().setHeartbeatTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/activity/ActivityOptions.Builder.html).

- Type: `Duration`
- Default: None

You can set Activity Options using an `ActivityStub` within a Workflow implementation, or per-Activity using `WorkflowImplementationOptions` within a Worker. Note that if you define per-Activity Type options with `WorkflowImplementationOptions.setActivityOptions()`, setting them again with `ActivityStub` in a Workflow will override that setting.

- With `ActivityStub`

```java
private final GreetingActivities activities =
    Workflow.newActivityStub(
        GreetingActivities.class,
        ActivityOptions.newBuilder()
            // note that either StartToCloseTimeout or ScheduleToCloseTimeout is
            // required when setting Activity options.
            .setStartToCloseTimeout(Duration.ofSeconds(5))
            .setHeartbeatTimeout(Duration.ofSeconds(2))
            .build());
```

- With `WorkflowImplementationOptions`

```java
WorkflowImplementationOptions options =
    WorkflowImplementationOptions.newBuilder()
        .setActivityOptions(
            ImmutableMap.of(
                "EmailCustomerGreeting",
                ActivityOptions.newBuilder()
                    // note that either StartToCloseTimeout or ScheduleToCloseTimeout is
                    // required when setting Activity options.
                    .setStartToCloseTimeout(Duration.ofSeconds(5))
                    .setHeartbeatTimeout(Duration.ofSeconds(2))
                    .build()))
        .build();
```

---

## Java SDK developer guide

![Java SDK Banner](/img/assets/banner-java-temporal.png)

:::info JAVA SPECIFIC RESOURCES

Build Temporal Applications with the Java SDK.
**Temporal Java Technical Resources:** - [Java SDK Quickstart - Setup Guide](https://docs.temporal.io/develop/java/set-up-your-local-java) - [Java API Documentation](https://javadoc.io/doc/io.temporal/temporal-sdk) - [Java SDK Code Samples](https://github.com/temporalio/samples-java) - [Java SDK GitHub](https://github.com/temporalio/sdk-java) - [Temporal 101 in Java Free Course](https://learn.temporal.io/courses/temporal_101/java/) **Get Connected with the Temporal Java Community:** - [Temporal Java Community Slack](https://temporalio.slack.com/archives/CTT84KXK9) - [Java SDK Forum](https://community.temporal.io/tag/java-sdk) ::: ## [Core application](/develop/java/core-application) Use the essential components of a Temporal Application (Workflows, Activities, and Workers) to build and run a Temporal application. - [How to develop a Workflow Definition in Java](/develop/java/core-application#develop-workflows) - [How to develop a basic Activity](/develop/java/core-application#develop-activities) - [How to start an Activity Execution](/develop/java/core-application#activity-execution) - [How to develop a Worker Program in Java](/develop/java/core-application#run-a-dev-worker) ## [Temporal Client](/develop/java/temporal-client) Connect to a Temporal Service and start a Workflow Execution. - [Connect to a development Temporal Service](/develop/java/temporal-client#connect-to-development-service) - [Connect to Temporal Cloud](/develop/java/temporal-client#connect-to-temporal-cloud) - [Start a Workflow Execution](/develop/java/temporal-client#start-workflow-execution) ## [Testing](/develop/java/testing-suite) Set up the testing suite and test Workflows and Activities. - [Test frameworks](/develop/java/testing-suite#test-frameworks) - [Test Activities](/develop/java/testing-suite#test-activities) - [Testing Workflows](/develop/java/testing-suite#test-workflows) - [How to Replay a Workflow Execution](/develop/java/testing-suite#replay) ## [Failure detection](/develop/java/failure-detection) Explore how your application can detect failures using timeouts and automatically attempt to mitigate them with retries. - [Workflow timeouts](/develop/java/failure-detection#workflow-timeouts) - [How to set Activity timeouts](/develop/java/failure-detection#activity-timeouts) - [How to Heartbeat an Activity](/develop/java/failure-detection#activity-heartbeats) ## [Workflow message passing](/develop/java/message-passing) Send messages to and read the state of Workflow Executions. - [How to develop with Signals](/develop/java/message-passing#signals) - [How to develop with Queries](/develop/java/message-passing#queries) - [What is a Dynamic Handler?](/develop/java/message-passing#dynamic-handler) - [How to develop with Updates](/develop/java/message-passing#updates) ## [Asynchronous Activity completion](/develop/java/asynchronous-activity-completion) Complete Activities asynchronously. - [How to asynchronously complete an Activity](/develop/java/asynchronous-activity-completion) ## [Versioning](/develop/java/versioning) Change Workflow Definitions without causing non-deterministic behavior in running Workflows. - [Temporal Java SDK Versioning APIs](/develop/java/versioning#patching) ## [Observability](/develop/java/observability) Configure and use the Temporal Observability APIs. 
- [How to emit metrics](/develop/java/observability#metrics) - [How to setup Tracing](/develop/java/observability#tracing) - [How to log from a Workflow](/develop/java/observability#logging) - [How to use Visibility APIs](/develop/java/observability#visibility) ## [Debugging](/develop/java/debugging) Explore various ways to debug your application. - [How to debug in a development environment](/develop/java/debugging#debug-in-a-development-environment) - [How to debug in a production environment](/develop/java/debugging#debug-in-a-production-environment) ## [Schedules](/develop/java/schedules) Run Workflows on a schedule and delay the start of a Workflow. - [How to Schedule a Workflow](/develop/java/schedules#schedule-a-workflow) - [How to set a Cron Schedule in Java](/develop/java/schedules#cron-schedule) ## [Data encryption](/develop/java/converters-and-encryption) Use compression, encryption, and other data handling by implementing custom converters and codecs. - [How to use a custom Payload Codec in Java](/develop/java/converters-and-encryption#custom-payload-codec) - [How to use custom Payload conversion](/develop/java/converters-and-encryption#custom-payload-conversion) ## Temporal Nexus The [Temporal Nexus](/develop/java/nexus) feature guide shows how to use Temporal Nexus to connect Durable Executions within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations. - [Create a Nexus Endpoint to route requests from caller to handler](/develop/java/nexus#create-nexus-endpoint) - [Define the Nexus Service contract](/develop/java/nexus#define-nexus-service-contract) - [Develop a Nexus Service and Operation handlers](/develop/java/nexus#develop-nexus-service-operation-handlers) - [Develop a caller Workflow that uses a Nexus Service](/develop/java/nexus#develop-caller-workflow-nexus-service) - [Make Nexus calls across Namespaces with a development Server](/develop/java/nexus#nexus-calls-across-namespaces-dev-server) - [Make Nexus calls across Namespaces in Temporal Cloud](/develop/java/nexus#nexus-calls-across-namespaces-temporal-cloud) ## [Interrupt a Workflow feature guide](/develop/java/cancellation) Interrupt a Workflow Execution with a Cancel or Terminate action. - [Cancel a Workflow](/develop/java/cancellation#cancellation) - [Terminate a Workflow](/develop/java/cancellation#termination) - [Reset a Workflow](/develop/java/cancellation#reset): Resume a Workflow Execution from an earlier point in its Event History. - [Cancel an Activity from a Workflow](/develop/java/cancellation#cancel-activity) ## [Child Workflows](/develop/java/child-workflows) Explore how to spawn a Child Workflow Execution and handle Child Workflow Events. - [Start a Child Workflow Execution](/develop/java/child-workflows#start-child-workflow) - [Set a Parent Close Policy](/develop/java/child-workflows#parent-close-policy) ## [Continue-As-New](/develop/java/continue-as-new) Continue the Workflow Execution with a new Workflow Execution using the same Workflow ID. - [Continue a Workflow as New](/develop/java/continue-as-new) ## [Durable Timers](/develop/java/timers) Use Timers to make a Workflow Execution pause or "sleep" for seconds, minutes, days, months, or years. - [What is a Timer?](/develop/java/timers#timers) ## [Side Effects](/develop/java/side-effects) Use Side Effects in Workflows. 
- [Side Effects](/develop/java/side-effects#side-effects)

## [Enriching the User Interface](/develop/java/enriching-ui)

Add descriptive information to workflows and events for better visibility and context in the UI.

- [Adding Summary and Details to Workflows](/develop/java/enriching-ui#adding-summary-and-details-to-workflows)

## [Manage Namespaces](/develop/java/namespaces)

Create and manage Namespaces.

- [Create a Namespace](/develop/java/namespaces#register-namespace)
- [Manage Namespaces](/develop/java/namespaces#manage-namespaces)

## [Spring Boot](/develop/java/spring-boot-integration)

Use Temporal in your Spring Boot application.

- [Spring Boot](/develop/java/spring-boot-integration#setup-dependency)

---

## Workflow message passing - Java SDK

A Workflow can act like a stateful web service that receives messages: Queries, Signals, and Updates. The Workflow implementation defines these endpoints via handler methods that can react to incoming messages and return values. Temporal Clients use messages to read Workflow state and control execution. See [Workflow message passing](/encyclopedia/workflow-message-passing) for a general overview of this topic. This page introduces these features for the Temporal Java SDK.

## Write message handlers {#writing-message-handlers}

Follow these guidelines when writing your message handlers:

- Message handlers are defined as methods on the Workflow class, using one of the three annotations: [`@QueryMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/QueryMethod.html), [`@SignalMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/SignalMethod.html), and [`@UpdateMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/UpdateMethod.html).
- The parameters and return values of handlers and the main Workflow method must be [serializable](/dataconversion).
- Prefer a single class with multiple fields over using multiple input parameters. A class allows you to add fields without changing the calling signature.

### Query handlers {#queries}

A [Query](/sending-messages#sending-queries) is a synchronous operation that retrieves state from a Workflow Execution:

```java
public class MessagePassingIntro {
  public enum Language {
    CHINESE,
    ENGLISH,
    FRENCH,
    SPANISH,
    PORTUGUESE,
  }

  public static class GetLanguagesInput {
    public boolean includeUnsupported;

    public GetLanguagesInput() {
      this.includeUnsupported = false;
    }

    public GetLanguagesInput(boolean includeUnsupported) {
      this.includeUnsupported = includeUnsupported;
    }
  }

  @WorkflowInterface
  public interface GreetingWorkflow {
    ...

    // 👉 Use the @QueryMethod annotation to define a Query handler in the
    // Workflow interface.
    @QueryMethod
    List<Language> getLanguages(GetLanguagesInput input);
  }

  public static class GreetingWorkflowImpl implements GreetingWorkflow {
    ...

    @Override
    public List<Language> getLanguages(GetLanguagesInput input) {
      // 👉 The Query handler returns a value: it must not mutate the Workflow state
      // or perform blocking operations.
      if (input.includeUnsupported) {
        return Arrays.asList(Language.values());
      } else {
        return new ArrayList<>(greetings.keySet());
      }
    }
  }
}
```

- A Query handler must not modify Workflow state.
- You can't perform blocking operations such as executing an Activity in a Query handler.
- The Query annotation accepts an argument (`name`) as described in the API reference docs for [`@QueryMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/QueryMethod.html); a short sketch follows below.
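For instance, a minimal sketch of the `name` argument (the `languages` Query Type shown here is hypothetical; by default, the Query Type is the method name):

```java
@WorkflowInterface
public interface GreetingWorkflow {
  // 👉 Hypothetical: expose this handler under the Query Type "languages"
  // instead of the default "getLanguages".
  @QueryMethod(name = "languages")
  List<Language> getLanguages(GetLanguagesInput input);
}
```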
### Signal handlers {#signals}

A [Signal](/sending-messages#sending-signals) is an asynchronous message sent to a running Workflow Execution to change its state and control its flow:

```java
public class MessagePassingIntro {
  public static class ApproveInput {
    private String name;

    public ApproveInput() {}

    public ApproveInput(String name) {
      this.name = name;
    }
  }

  @WorkflowInterface
  public interface GreetingWorkflow {
    ...

    // 👉 Use the @SignalMethod annotation to define a Signal handler in the
    // Workflow interface.
    @SignalMethod
    void approve(ApproveInput input);
  }

  public static class GreetingWorkflowImpl implements GreetingWorkflow {
    ...

    private Boolean approvedForRelease;
    private String approverName;

    @Override
    public void approve(ApproveInput input) {
      // 👉 The Signal handler mutates the Workflow state but cannot return a value.
      this.approvedForRelease = true;
      this.approverName = input.name;
    }
  }
}
```

- The handler should not return a value. The response is sent immediately from the server, without waiting for the Workflow to process the Signal.
- The Signal annotation accepts arguments (`name` and `unfinishedPolicy`) as described in the API reference docs for [`@SignalMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/SignalMethod.html).
- Signal (and Update) handlers can be blocking. This allows you to use Activities, Child Workflows, durable [`Workflow.sleep`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#sleep(java.time.Duration)) Timers, [`Workflow.await`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#await(java.time.Duration,java.util.function.Supplier)), and more. See [Blocking handlers](#blocking-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for guidelines on safely using blocking Signal and Update handlers.

### Update handlers and validators {#updates}

An [Update](/sending-messages#sending-updates) is a trackable synchronous request sent to a running Workflow Execution. It can change the Workflow state, control its flow, and return a result. The sender must wait until the Worker accepts or rejects the Update. The sender may wait further to receive a returned value or an exception if something goes wrong:

```java
public class MessagePassingIntro {
  @WorkflowInterface
  public interface GreetingWorkflow {
    ...

    // 👉 Use the @UpdateMethod annotation to define an Update handler in the
    // Workflow interface.
    @UpdateMethod
    Language setLanguage(Language language);

    // 👉 Update validators are optional
    @UpdateValidatorMethod(updateName = "setLanguage")
    void setLanguageValidator(Language language);
  }

  public static class GreetingWorkflowImpl implements GreetingWorkflow {
    ...

    @Override
    public Language setLanguage(Language language) {
      // 👉 The Update handler can mutate the Workflow state and return a value.
      Language previousLanguage = this.language;
      this.language = language;
      return previousLanguage;
    }

    @Override
    public void setLanguageValidator(Language language) {
      // 👉 The Update validator performs validation but cannot mutate the Workflow state.
      if (!greetings.containsKey(language)) {
        throw new IllegalArgumentException("Unsupported language: " + language);
      }
    }
  }
}
```

- The Update annotation accepts arguments (`name` and `unfinishedPolicy`) as described in the API reference docs for [`@UpdateMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/UpdateMethod.html).
- About validators:
  - Use validators to reject an Update before it is written to History. Validators are always optional. If you don't need to reject Updates, you can skip them.
  - Define an Update validator with the `@UpdateValidatorMethod` annotation. Use the `updateName` argument when declaring the validator to connect it to its Update. The validator must return `void` and accept the same argument types as the handler.
- Accepting and rejecting Updates with validators:
  - To reject an Update, throw an exception of any type in the validator.
  - Without a validator, Updates are always accepted.
- Validators and Event History:
  - The `WorkflowExecutionUpdateAccepted` event is written into the History whether the acceptance was automatic or programmatic.
  - When a validator throws an error, the Update is rejected, the handler is not run, and `WorkflowExecutionUpdateAccepted` _won't_ be added to the Event History. The caller receives an "Update failed" error.
- Use [`Workflow.getCurrentUpdateInfo`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#getCurrentUpdateInfo()) to obtain information about the current Update. This includes the Update ID, which can be useful for deduplication when using Continue-As-New: see [Ensuring your messages are processed exactly once](https://docs.temporal.io/handling-messages#exactly-once-message-processing).
- Signal (and Update) handlers can be blocking, letting them use Activities, Child Workflows, durable [`Workflow.sleep`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#sleep(java.time.Duration)) Timers, [`Workflow.await`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#await(java.time.Duration,java.util.function.Supplier)) conditions, and more. See [Blocking handlers](#blocking-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for safe usage guidelines.

## Send messages {#send-messages}

To send Queries, Signals, or Updates, you call methods on a client-side stub of the Workflow interface, often called the "WorkflowStub." Use [newWorkflowStub](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowClient.html#newWorkflowStub(java.lang.Class,io.temporal.client.WorkflowOptions)) to obtain the WorkflowStub. For example:

```java
WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
WorkflowClient client = WorkflowClient.newInstance(service);
WorkflowOptions workflowOptions =
    WorkflowOptions.newBuilder().setTaskQueue(TASK_QUEUE).setWorkflowId(WORKFLOW_ID).build();
// Create the Workflow client stub. It is used to start the Workflow Execution.
GreetingWorkflow workflow = client.newWorkflowStub(GreetingWorkflow.class, workflowOptions);
// Start the Workflow asynchronously by calling its getGreetings Workflow method
WorkflowClient.start(workflow::getGreetings);
```

To check the argument types required when sending messages, and the return type for Queries and Updates, refer to the corresponding handler method in the Workflow Definition.

:::warning Using Continue-as-New and Updates

- Temporal _does not_ support Continue-as-New functionality within Update handlers.
- Complete all handlers _before_ using Continue-as-New.
- Use Continue-as-New from your main Workflow Definition method, just as you would complete or fail a Workflow Execution.
:::

### Send a Query {#send-query}

Call a Query method defined within a Workflow from a `WorkflowStub` created in Client code to send a Query to a Workflow Execution:

```java
List<Language> languages = workflow.getLanguages(new GetLanguagesInput(false));
System.out.println("Supported languages: " + languages);
```

- Sending a Query doesn't add events to a Workflow's Event History.
- You can send Queries to closed Workflow Executions within a Namespace's Workflow retention period. This includes Workflows that have completed, failed, or timed out. Querying terminated Workflows is not safe and, therefore, not supported.
- A Worker must be online and polling the Task Queue to process a Query.

### Send a Signal {#send-signal}

You can send a Signal to a Workflow Execution from a Temporal Client or from another Workflow Execution. However, you can only send Signals to Workflow Executions that haven't closed.

#### Send a Signal from a Client {#send-signal-from-client}

To send a Signal from Client code, call a Signal method on the WorkflowStub:

```java
workflow.approve(new ApproveInput("Me"));
```

- The call returns when the server accepts the Signal; it does _not_ wait for the Signal to be delivered to the Workflow Execution.
- The [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the Workflow's Event History.

#### Send a Signal from a Workflow {#send-signal-from-workflow}

A Workflow can send a Signal to another Workflow, known as an _External Signal_. Use [`Workflow.newExternalWorkflowStub`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#newExternalWorkflowStub(java.lang.Class,io.temporal.api.common.v1.WorkflowExecution)) in your _current_ Workflow to create an `ExternalWorkflowStub` for the other Workflow. Call Signal methods on the external stub to Signal the other Workflow:

```java
OtherWorkflow other = Workflow.newExternalWorkflowStub(OtherWorkflow.class, otherWorkflowID);
other.mySignalMethod();
```

When an External Signal is sent:

- A [SignalExternalWorkflowExecutionInitiated](/references/events#signalexternalworkflowexecutioninitiated) Event appears in the sender's Event History.
- A [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the recipient's Event History.

#### Signal-With-Start {#signal-with-start}

Signal-With-Start allows a Client to send a Signal to a Workflow Execution, starting the Execution if it is not already running. If there's a Workflow running with the given Workflow Id, it will be signaled. If there isn't, a new Workflow will be started and immediately signaled.

To use Signal-With-Start, call `signalWithStart` and pass the name of your Signal with its arguments:

```java
public static void signalWithStart() {
  // WorkflowStub is a client-side stub to a single Workflow instance
  WorkflowStub untypedWorkflowStub =
      client.newUntypedWorkflowStub(
          "GreetingWorkflow",
          WorkflowOptions.newBuilder()
              .setWorkflowId(workflowId)
              .setTaskQueue(taskQueue)
              .build());

  untypedWorkflowStub.signalWithStart(
      "setCustomer", new Object[] {customer2}, new Object[] {customer1});

  String greeting = untypedWorkflowStub.getResult(String.class);
}
```

Here's the `WorkflowInterface` for the previous example. When using Signal-With-Start, the Signal handler (`setCustomer`) will be executed before the Workflow method (`greet`).
```java
@WorkflowInterface
public interface GreetingWorkflow {
  @WorkflowMethod
  String greet(Customer customer);

  @SignalMethod
  void setCustomer(Customer customer);

  @QueryMethod
  Customer getCustomer();
}
```

### Send an Update {#send-update-from-client}

An Update is a synchronous, blocking call that can change Workflow state, control its flow, and return a result. A Client sending an Update must wait until the Server delivers the Update to a Worker. Workers must be available and responsive. If you need a response as soon as the Server receives the request, use a Signal instead. Also note that you can't send Updates to other Workflow Executions.

- `WorkflowExecutionUpdateAccepted` is added to the Event History when the Worker confirms that the Update passed validation.
- `WorkflowExecutionUpdateCompleted` is added to the Event History when the Worker confirms that the Update has finished.

To send an Update to a Workflow Execution, you can:

- Call the Update method on a WorkflowStub in Client code and wait for the Update to complete. This code fetches an Update result:

  ```java
  Language previousLanguage = workflow.setLanguage(Language.CHINESE);
  ```

- Send [`startUpdate`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#startUpdate(io.temporal.client.UpdateOptions,java.lang.Object...)) to receive a [`WorkflowUpdateHandle`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowUpdateHandle.html) as soon as the Update is accepted or rejected.
  - Use this `WorkflowUpdateHandle` later to fetch your results.
  - Blocking Update handlers normally perform long-running asynchronous operations.
  - `startUpdate` only waits until the Worker has accepted or rejected the Update, not until all asynchronous operations are complete.

  For example:

  ```java
  WorkflowUpdateHandle<Language> handle =
      WorkflowStub.fromTyped(workflow)
          .startUpdate(
              "setLanguage", WorkflowUpdateStage.ACCEPTED, Language.class, Language.ENGLISH);
  previousLanguage = handle.getResultAsync().get();
  ```

For more details, see the "Blocking handlers" section.

To obtain an Update handle, you can:

- Use [`startUpdate`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#startUpdate(io.temporal.client.UpdateOptions,java.lang.Object...)) to start an Update and return the handle, as shown in the preceding example.
- Use [`getUpdateHandle`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#getUpdateHandle(java.lang.String,java.lang.Class)) to fetch a handle for an in-progress Update using the Update ID and Workflow ID; a sketch follows at the end of this section.

You can use the `WorkflowUpdateHandle` to obtain information about the Update:

- `getExecution()`: Returns the Workflow Execution that this Update was sent to.
- `getId()`: Returns the Update's unique ID, which can be useful for deduplication when using Continue-As-New: see [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing).
- `getResultAsync()`: Returns a `CompletableFuture` that can be used to wait for the Update to complete.

#### Update-With-Start {#update-with-start}

:::tip

For open source server users, [Temporal Server version 1.28](https://github.com/temporalio/temporal/releases/tag/v1.28.0) is recommended.
:::

[Update-with-Start](/sending-messages#update-with-start) lets you [send an Update](/develop/java/message-passing#send-update-from-client) that checks whether an already-running Workflow with that ID exists:

- If the Workflow exists, the Update is processed.
- If the Workflow does not exist, a new Workflow Execution is started with the given ID, and the Update is processed before the main Workflow method starts to execute.

Use the [`startUpdateWithStart`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowClient.html#startUpdateWithStart(io.temporal.workflow.Functions.Func,io.temporal.client.UpdateOptions,io.temporal.client.WithStartWorkflowOperation)) WorkflowClient API. It returns once the requested Update wait stage has been reached, or when the request times out. Use the [`WorkflowUpdateHandle`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowUpdateHandle.html) to retrieve a result from the Update.

You will need to provide:

- A WorkflowStub created from [`WorkflowOptions`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.html). The `WorkflowOptions` require a [Workflow Id Conflict Policy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) to be specified. Choose "Use Existing" and use an idempotent Update handler to ensure your code can be executed again in case of a Client failure. Not all `WorkflowOptions` are allowed; for example, specifying a Cron Schedule will result in an error.
- [`UpdateOptions`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/UpdateOptions.html). As with [Update Workflow](/develop/java/message-passing#send-update-from-client), the Update wait stage must be specified. For Update-with-Start, the Workflow Id is optional. When specified, the Id must match the one used in `WorkflowOptions`. Since a running Workflow Execution may not already exist, you can't set a Run Id.
- [`WithStartWorkflowOperation`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WithStartWorkflowOperation.html). Specify the Workflow method. Note that a `WithStartWorkflowOperation` can only be used once. Re-using a previously used operation returns an error from `startUpdateWithStart`.

For example:

```java
WorkflowUpdateHandle<Language> handle =
    WorkflowClient.startUpdateWithStart(
        workflow::setLanguage,
        Language.ENGLISH,
        UpdateOptions.newBuilder().setWaitForStage(WorkflowUpdateStage.ACCEPTED).build(),
        new WithStartWorkflowOperation<>(workflow::getGreetings));
Language previousLanguage = handle.getResultAsync().get();
```

To obtain the Update result directly, use the [`executeUpdateWithStart`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowClient.html#executeUpdateWithStart(io.temporal.workflow.Functions.Func,io.temporal.client.UpdateOptions,io.temporal.client.WithStartWorkflowOperation)) WorkflowClient API. It returns once the Update result is available, or when the API call times out. The Update wait stage on the `UpdateOptions` is optional; when specified, it must be `WorkflowUpdateStage.COMPLETED`. For example:

```java
Language previousLanguage =
    WorkflowClient.executeUpdateWithStart(
        workflow::setLanguage,
        Language.ENGLISH,
        UpdateOptions.newBuilder().build(),
        new WithStartWorkflowOperation<>(workflow::getGreetings));
```

For more examples, see the [Java sample for early-return pattern](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/earlyreturn).
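As referenced earlier, you can also fetch a handle for an in-progress Update when you only have the IDs. A minimal sketch, assuming the `client` from the earlier examples and hypothetical Workflow and Update IDs:

```java
// Hypothetical IDs: in practice, persist the Update ID returned by startUpdate
WorkflowStub stub = client.newUntypedWorkflowStub("GreetingWorkflow-id");
WorkflowUpdateHandle<Language> handle = stub.getUpdateHandle("my-update-id", Language.class);
// Wait for the Update result
Language previousLanguage = handle.getResultAsync().get();
```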
:::info NON-TYPE SAFE API CALLS

In real-world development, sometimes you may be unable to import Workflow Definition method signatures. When you don't have access to the Workflow Definition or it isn't written in Java, you can use these non-type safe APIs to obtain an untyped WorkflowStub:

- [`WorkflowClient.newUntypedWorkflowStub`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowClient.html#newUntypedWorkflowStub(java.lang.String,io.temporal.client.WorkflowOptions))
- [`Workflow.newUntypedExternalWorkflowStub`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#newUntypedExternalWorkflowStub(java.lang.String))

Pass method names instead of method objects to:

- [`WorkflowStub.query`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#query(java.lang.String,java.lang.Class,java.lang.Object...))
- [`WorkflowStub.signal`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#signal(java.lang.String,java.lang.Object...))
- [`WorkflowStub.update`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#update(java.lang.String,java.lang.Class,java.lang.Object...))
- [`WorkflowStub.startUpdateWithStart`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#startUpdateWithStart(io.temporal.client.UpdateOptions,java.lang.Object%5B%5D,java.lang.Object%5B%5D))
- [`WorkflowStub.executeUpdateWithStart`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html#executeUpdateWithStart(io.temporal.client.UpdateOptions,java.lang.Object%5B%5D,java.lang.Object%5B%5D))

:::

## Message handler patterns {#message-handler-patterns}

This section covers common write operations, such as Signal and Update handlers. It doesn't apply to pure read operations, like Queries or Update Validators.

:::tip

For additional information, see [Inject work into the main Workflow](/handling-messages#injecting-work-into-main-workflow) and [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing).

:::

### Do blocking operations in handlers {#blocking-handlers}

Signal and Update handlers can block. This allows you to use [`Workflow.await`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#await(java.time.Duration,java.util.function.Supplier)), Activities, Child Workflows, [`Workflow.sleep`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#sleep(java.time.Duration)) Timers, and more. This expands the possibilities for what a handler can do, but it also means that handler executions and your main Workflow method all run concurrently, with switching occurring between them at await calls. It's essential to understand what could go wrong in order to use blocking handlers safely. See [Workflow message passing](/encyclopedia/workflow-message-passing) for guidance on safe usage of blocking Signal and Update handlers, and the [Controlling handler concurrency](#control-handler-concurrency) and [Waiting for message handlers to finish](#wait-for-message-handlers) sections below.

The following code modifies the Update handler from earlier on this page. The Update handler now makes a blocking call to execute an Activity:

```java
public static class GreetingWorkflowImpl implements GreetingWorkflow {
  ...
  @Override
  public Language setLanguage(Language language) {
    if (!greetings.containsKey(language)) {
      String greeting = activity.greetingService(language);
      if (greeting == null) {
        // 👉 An Update validator cannot be blocking, so it cannot be used to check that
        // the remote greetingService supports the requested language. Throwing an
        // ApplicationFailure will fail the Update, but the WorkflowExecutionUpdateAccepted
        // event will still be added to history.
        throw ApplicationFailure.newFailure(
            "Greeting service does not support: " + language, "GreetingFailure");
      }
      greetings.put(language, greeting);
    }
    Language previousLanguage = this.language;
    this.language = language;
    return previousLanguage;
  }
}
```

Although a Signal handler can also make blocking calls like this, using an Update handler allows the Client to receive a result or error once the Activity completes. This lets your Client track the progress of asynchronous work performed by the Update's Activities, Child Workflows, etc.

### Add blocking wait conditions {#block-with-wait}

Sometimes, blocking Signal or Update handlers need to meet certain conditions before they should continue. You can use [`Workflow.await`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#await(java.time.Duration,java.util.function.Supplier)) to prevent the code from proceeding until a condition is true. You specify the condition by passing a function that returns `true` or `false`. This is an important feature that helps you control your handler logic.

Here are two important use cases for `Workflow.await`:

- Waiting in a handler until it is appropriate to continue.
- Waiting in the main Workflow until all active handlers have finished.

#### Wait for conditions in handlers {#wait-in-handlers}

It's common to use `Workflow.await` in a handler. For example, suppose your Workflow class has an `updateReadyToExecute` method that indicates whether your Update handler should be allowed to start executing. You can use `Workflow.await` in the handler to make the handler pause until the condition is met:

```java
@Override
public String setLanguage(UpdateInput input) {
  Workflow.await(() -> this.updateReadyToExecute(input));
  ...
}
```

Remember: handlers can execute before the main Workflow method starts.

You can also use `Workflow.await` anywhere else in the handler to wait for a specific condition to become true. This allows you to write handlers that pause at multiple points, each time waiting for a required condition to become true.

#### Ensure your handlers finish before the Workflow completes {#wait-for-message-handlers}

`Workflow.await` can ensure your handler completes before a Workflow finishes. When your Workflow uses blocking Signal or Update handlers, your main Workflow method can return or Continue-as-New while a handler is still waiting on an async task, such as an Activity. The Workflow completing may interrupt the handler before it finishes crucial work and cause Client errors when trying to retrieve Update results. Use `Workflow.await` to wait for [`Workflow.isEveryHandlerFinished`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#isEveryHandlerFinished()) to return `true` to address this problem and allow your Workflow to end smoothly:

```java
public class MyWorkflowImpl implements MyWorkflow {
  ...

  @Override
  public String run() {
    ...
    Workflow.await(() -> Workflow.isEveryHandlerFinished());
    return "workflow-result";
  }
}
```

By default, your Worker will log a warning when you allow a Workflow Execution to finish with unfinished handler executions. You can silence these warnings on a per-handler basis by passing the `unfinishedPolicy` argument to the [`@SignalMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/SignalMethod.html) / [`@UpdateMethod`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/UpdateMethod.html) annotation:

```java
@WorkflowInterface
public interface MyWorkflow {
  ...

  @UpdateMethod(unfinishedPolicy = HandlerUnfinishedPolicy.ABANDON)
  void myUpdate();
}
```

See [Finishing handlers before the Workflow completes](/handling-messages#finishing-message-handlers) for more information.

### Use `@WorkflowInit` to operate on Workflow input before any handler executes

Normally, your Workflow's constructor won't have any parameters. However, if you use the `@WorkflowInit` annotation on your constructor, you can give it the same [Workflow parameters](/develop/java/core-application#workflow-parameters) as your `@WorkflowMethod`. The SDK will then ensure that your constructor receives the Workflow input arguments that the [Client sent](/develop/java/temporal-client#start-workflow-execution). The Workflow input arguments are also passed to your `@WorkflowMethod` method; that always happens, whether or not you use the `@WorkflowInit` annotation. This is useful if you have message handlers that need access to Workflow input: see [Initializing the Workflow first](/handling-messages#workflow-initializers).

:::caution

Do not make blocking calls from within your `@WorkflowInit` method. This could result in your Workflow being incompletely initialized at the start, meaning, for example, that Signal, Query, and Update handler registration would be delayed.

:::

Here's an example. Notice that the constructor and `getGreeting` must have the same parameters:

```java
public class GreetingExample {
  @WorkflowInterface
  public interface GreetingWorkflow {
    @WorkflowMethod
    String getGreeting(String input);

    @UpdateMethod
    boolean checkTitleValidity();
  }

  public static class GreetingWorkflowImpl implements GreetingWorkflow {
    private final String nameWithTitle;
    private boolean titleHasBeenChecked;

    ...

    // Note the annotation is on a public constructor
    @WorkflowInit
    public GreetingWorkflowImpl(String input) {
      this.nameWithTitle = "Sir " + input;
      this.titleHasBeenChecked = false;
    }

    @Override
    public String getGreeting(String input) {
      Workflow.await(() -> titleHasBeenChecked);
      return "Hello " + nameWithTitle;
    }

    @Override
    public boolean checkTitleValidity() {
      // 👉 The handler is now guaranteed to see the Workflow input
      // after it has been processed by the constructor.
      boolean isValid = activity.checkTitleValidity(nameWithTitle);
      titleHasBeenChecked = true;
      return isValid;
    }
  }
}
```

### Use locks to prevent concurrent handler execution {#control-handler-concurrency}

Concurrent processes can interact in unpredictable ways. Incorrectly written [concurrent message-passing](/handling-messages#message-handler-concurrency) code may not work correctly when multiple handler instances run simultaneously. Here's an example of a pathological case:

```java
public class DataWorkflowImpl implements DataWorkflow {
  ...

  @Override
  public void badSignalHandler() {
    Data data = activity.fetchData();
    this.x = data.x;

    // 🐛🐛 Bug!! If multiple instances of this method execute concurrently, then
    // there may be times when the Workflow has this.x from one Activity execution
    // and this.y from another.
    Workflow.sleep(Duration.ofSeconds(1));
    this.y = data.y;
  }
}
```

Coordinating access with `WorkflowLock` corrects this code. Locking makes sure that only one handler instance can execute a specific section of code at any given time:

```java
public class DataWorkflowImpl implements DataWorkflow {
  WorkflowLock lock = Workflow.newWorkflowLock();
  ...

  @Override
  public void safeSignalHandler() {
    try {
      lock.lock();
      Data data = activity.fetchData();
      this.x = data.x;

      // ✅ OK: the scheduler may switch now to a different handler execution,
      // or to the main Workflow method, but no other execution of this handler
      // can run until this execution finishes.
      Workflow.sleep(Duration.ofSeconds(1));
      this.y = data.y;
    } finally {
      lock.unlock();
    }
  }
}
```

## Message handler troubleshooting {#message-handler-troubleshooting}

When sending a Signal, Update, or Query to a Workflow, your Client might encounter the following errors:

- **The Client can't contact the server**: You'll receive a [`WorkflowServiceException`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowServiceException.html) on which the `cause` is a [`StatusRuntimeException`](https://grpc.github.io/grpc-java/javadoc/io/grpc/StatusRuntimeException.html) with a `status` of `UNAVAILABLE` (after some retries).
- **The Workflow does not exist**: You'll receive a [`WorkflowNotFoundException`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowNotFoundException.html).

See [Exceptions in message handlers](/handling-messages#exceptions) for a non-Java-specific discussion of this topic.

### Problems when sending a Signal {#signal-problems}

When using Signal, the `WorkflowException`s above are the only types of exception that will result from the request. In contrast, for Queries and Updates, the Client waits for a response from the Worker. If an issue occurs during handler execution by the Worker, the Client may receive an exception.

### Problems when sending an Update {#update-problems}

When working with Updates, you may encounter these errors:

- **No Workflow Workers are polling the Task Queue**: Your request will be retried by the SDK Client indefinitely. You can impose a timeout by calling `CompletableFuture.get()` with a timeout parameter; this throws a `java.util.concurrent.TimeoutException` when it expires.
- **Update failed**: You'll receive a `WorkflowUpdateException`. There are two ways this can happen:
  - The Update was rejected by an Update validator defined in the Workflow alongside the Update handler.
  - The Update failed after having been accepted.

  Update failures are like [Workflow failures](/references/failures). Issues that cause a Workflow failure in the main method also cause Update failures in the Update handler. These might include:

  - A failed Child Workflow
  - A failed Activity (if the Activity retries have been set to a finite number)
  - The Workflow author throwing `ApplicationFailure`
  - Any error listed in [getFailWorkflowExceptionTypes](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/worker/WorkflowImplementationOptions.html#getFailWorkflowExceptionTypes()) (empty by default)

- **The handler caused the Workflow Task to fail**: A [Workflow Task Failure](/references/failures) causes the server to retry Workflow Tasks indefinitely.
  What happens to your Update request depends on its stage:

  - If the request hasn't been accepted by the server, you receive a `FAILED_PRECONDITION` [`WorkflowServiceException`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowServiceException.html).
  - If the request has been accepted, it is durable. Once the Workflow is healthy again after a code deploy, use a [`WorkflowUpdateHandle`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowUpdateHandle.html) to fetch the Update result.

- **The Workflow finished while the Update handler execution was in progress**: You'll receive a [`WorkflowServiceException`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowServiceException.html) with the message "workflow execution already completed". This will happen if the Workflow finished while the Update handler execution was in progress, for example because:
  - The Workflow was canceled or failed.
  - The Workflow completed normally or continued-as-new, and the Workflow author did not [wait for handlers to be finished](/handling-messages#finishing-message-handlers).

### Problems when sending a Query {#query-problems}

When working with Queries, you may encounter these errors:

- **There is no Workflow Worker polling the Task Queue**: You'll receive a [`WorkflowServiceException`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowServiceException.html) on which the `cause` is a [`StatusRuntimeException`](https://grpc.github.io/grpc-java/javadoc/io/grpc/StatusRuntimeException.html) with a `status` of `FAILED_PRECONDITION`.
- **Query failed**: You'll receive a [`WorkflowQueryException`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowQueryException.html) if something goes wrong during a Query. Any exception in a Query handler will trigger this error. This differs from Signal and Update requests, where exceptions can lead to Workflow Task Failure instead.
- **The handler caused the Workflow Task to fail**: This would happen, for example, if the Query handler blocks the thread for too long without yielding.

## Dynamic components {#dynamic-handler}

A dynamic Workflow, Activity, Signal, Update, or Query is a kind of unnamed item. Normally, these items are registered by name with the Worker and invoked at runtime. When an unregistered or unrecognized Workflow, Activity, or message request arrives with a recognized method signature, the Worker can use a pre-registered dynamic stand-in.

For example, you might send a request to start a Workflow named "MyUnknownWorkflow". After receiving a Workflow Task, the Worker may find that there's no registered Workflow Definition of that type. It then checks whether there's a registered dynamic Workflow. If the dynamic Workflow signature matches the incoming Workflow signature, the Worker invokes it just as it would a statically named version. By registering dynamic versions of your Temporal components, the Worker can fall back to these alternate implementations for name mismatches.

:::caution

Use dynamic elements judiciously and as a fallback mechanism, not a primary design. They can introduce long-term maintainability and debugging issues. Reserve dynamic invocation for cases where a name is not or can't be known at compile time.
:::

### Set a Dynamic Workflow {#set-a-dynamic-workflow}

Use [`DynamicWorkflow`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/DynamicWorkflow.html) to implement Workflow Types dynamically. Register a Workflow implementation type that implements `DynamicWorkflow` to handle any Workflow Type that is not explicitly registered with the Worker.

The dynamic Workflow interface is implemented with the `execute` method. This method takes in `EncodedValues` that are inputs to the Workflow Execution. These inputs can be specified by the Client when invoking the Workflow Execution.

```java
public class MyDynamicWorkflow implements DynamicWorkflow {
  @Override
  public Object execute(EncodedValues args) {
    // 👉 Implement Workflow logic here, for example by dispatching on
    // Workflow.getInfo().getWorkflowType().
    return null;
  }
}
```

### How to set a Dynamic Activity {#set-a-dynamic-activity}

To handle Activity Types that do not have an explicitly registered handler, you can directly implement a dynamic Activity. Use `DynamicActivity` to implement any number of Activity Types dynamically. When an Activity implementation that implements `DynamicActivity` is registered, it is called for any Activity Type invocation that doesn't have an explicitly registered handler. The dynamic Activity interface is implemented with the `execute` method, as shown in the following example.

```java
// Dynamic Activity implementation
public static class DynamicGreetingActivityImpl implements DynamicActivity {
  @Override
  public Object execute(EncodedValues args) {
    String activityType = Activity.getExecutionContext().getInfo().getActivityType();
    return activityType
        + ": "
        + args.get(0, String.class)
        + " "
        + args.get(1, String.class)
        + " from: "
        + args.get(2, String.class);
  }
}
```

Use `Activity.getExecutionContext()` to get information about the Activity Type that should be implemented dynamically.

### How to set a Dynamic Signal {#set-a-dynamic-signal}

You can also implement Signal handlers dynamically. This is useful for library-level code and implementation of DSLs.

Use `Workflow.registerListener(Object)` to register an implementation of `DynamicSignalHandler` in the Workflow implementation code.

```java
Workflow.registerListener(
    (DynamicSignalHandler)
        (signalName, encodedArgs) -> name = encodedArgs.get(0, String.class));
```

When registered, any Signals sent to the Workflow without a defined handler will be delivered to the `DynamicSignalHandler`. Note that you can call `Workflow.registerListener(Object)` only once per Workflow Execution. `DynamicSignalHandler` can be implemented in both regular and dynamic Workflow implementations.

### How to set a Dynamic Query {#set-a-dynamic-query}

You can also implement Query handlers dynamically. This is useful for library-level code and implementation of DSLs.

Use `Workflow.registerListener(Object)` to register an implementation of `DynamicQueryHandler` in the Workflow implementation code.

```java
Workflow.registerListener(
    (DynamicQueryHandler)
        (queryName, encodedArgs) -> name = encodedArgs.get(0, String.class));
```

When registered, any Queries sent to the Workflow without a defined handler will be delivered to the `DynamicQueryHandler`. Note that you can call `Workflow.registerListener(Object)` only once per Workflow Execution. `DynamicQueryHandler` can be implemented in both regular and dynamic Workflow implementations.

### How to set a Dynamic Update {#set-a-dynamic-update}

You can also implement Update handlers dynamically. This is useful for library-level code and implementation of DSLs. Use `Workflow.registerListener(Object)` to register an implementation of `DynamicUpdateHandler` in the Workflow implementation code.
```java
Workflow.registerListener(
    (DynamicUpdateHandler)
        (updateName, encodedArgs) -> encodedArgs.get(0, String.class));
```

When registered, any Updates sent to the Workflow without a defined handler will be delivered to the `DynamicUpdateHandler`. You can call `Workflow.registerListener(Object)` only once per Workflow Execution. `DynamicUpdateHandler` can be implemented in both regular and dynamic Workflow implementations.

---

## Namespaces - Java SDK

This page shows how to do the following:

- [Register a Namespace](#register-namespace)
- [Manage Namespaces](#manage-namespaces)

You can create, update, deprecate, or delete your [Namespaces](/namespaces) using either the Temporal CLI or SDK APIs.

Use Namespaces to isolate your Workflow Executions according to your needs. For example, you can use Namespaces to match the development lifecycle by having separate `dev` and `prod` Namespaces. You could also use them to ensure Workflow Executions between different teams never communicate, such as ensuring that the `teamA` Namespace never impacts the `teamB` Namespace.

On Temporal Cloud, use the [Temporal Cloud UI](/cloud/namespaces#create-a-namespace) to create and manage a Namespace from the UI, or [tcld commands](https://docs.temporal.io/cloud/tcld/namespace/) to manage Namespaces from the command-line interface.

On self-hosted Temporal Service, you can register and manage your Namespaces using the Temporal CLI (recommended) or programmatically using APIs. Note that these APIs and Temporal CLI commands will not work with Temporal Cloud.

Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service in the Temporal Service to set restrictions on who can create, update, or deprecate Namespaces.

You must register a Namespace with the Temporal Service before setting it in the Temporal Client.

## Register a Namespace {#register-namespace}

**How to register a Namespace using the Java SDK.**

Registering a Namespace creates a Namespace on the Temporal Service or Temporal Cloud.

On Temporal Cloud, use the [Temporal Cloud UI](/cloud/namespaces#create-a-namespace) or [tcld commands](https://docs.temporal.io/cloud/tcld/namespace/) to create Namespaces.

On self-hosted Temporal Service, you can register your Namespaces using the Temporal CLI (recommended) or programmatically using APIs. Note that these APIs and Temporal CLI commands will not work with Temporal Cloud.

Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service in the Temporal Service to set restrictions on who can create, update, or deprecate Namespaces.

Use the [`RegisterNamespace` API](https://github.com/temporalio/api/blob/f0350f8032ad2f0c60c539b3b61ea37f412f1cf7/temporal/api/workflowservice/v1/service.proto) to register a [Namespace](/namespaces) and set the [Retention Period](/temporal-service/temporal-server#retention-period) for the Workflow Execution Event History for the Namespace.

```java
//...
public static void createNamespace(String name) {
  RegisterNamespaceRequest req =
      RegisterNamespaceRequest.newBuilder()
          .setNamespace(name)
          // Keeps the Workflow Execution Event History for up to 3 days in the
          // Persistence store. Not setting this value will throw an error.
          .setWorkflowExecutionRetentionPeriod(Durations.fromDays(3))
          .build();
  service.blockingStub().registerNamespace(req);
}
//...
```

The Retention Period setting using `WorkflowExecutionRetentionPeriod` is mandatory. The minimum value you can set for this period is 1 day.
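Once registered, you point a Temporal Client at the Namespace. A minimal sketch, assuming a local development Service and a hypothetical `your-custom-namespace` Namespace created as above:

```java
// Connect a Client to the registered Namespace (hypothetical name)
WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
WorkflowClient client =
    WorkflowClient.newInstance(
        service,
        WorkflowClientOptions.newBuilder()
            .setNamespace("your-custom-namespace")
            .build());
```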
Once registered, set the Namespace using `WorkflowClientOptions` within a Workflow Client to run your Workflow Executions within that Namespace. See [Connect to a Development Temporal Service](/develop/java/temporal-client#connect-to-development-service) for details.

Note that Namespace registration using this API takes up to 10 seconds to complete. Ensure that you wait for this registration to complete before starting the Workflow Execution against the Namespace.

To update your Namespace, use the [UpdateNamespace API](#manage-namespaces) with the NamespaceClient.

## Manage Namespaces {#manage-namespaces}

**How to manage Namespaces using the Java SDK.**

You can get details for your Namespaces, update Namespace configuration, and deprecate or delete your Namespaces.

On Temporal Cloud, use the [Temporal Cloud UI](/cloud/namespaces#create-a-namespace) or [tcld commands](https://docs.temporal.io/cloud/tcld/namespace/) to manage Namespaces.

On self-hosted Temporal Service, you can manage your registered Namespaces using the Temporal CLI (recommended) or programmatically using APIs. Note that these APIs and Temporal CLI commands will not work with Temporal Cloud.

Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service in the Temporal Service to set restrictions on who can create, update, or deprecate Namespaces. You must register a Namespace with the Temporal Service before setting it in the Temporal Client.

- Update information and configuration for a registered Namespace on your Temporal Service:
  - With the Temporal CLI: [`temporal operator namespace update`](/cli/operator#update)
  - Use the [`UpdateNamespace` API](https://github.com/temporalio/api/blob/e5cf521c6fdc71c69353f3d2ac5506dd6e827af8/temporal/api/workflowservice/v1/service.proto) to update configuration on a Namespace. Example:

```java
//...
UpdateNamespaceRequest updateNamespaceRequest =
    UpdateNamespaceRequest.newBuilder()
        .setNamespace("your-namespace-name") // the Namespace that you want to update
        .setUpdateInfo(
            UpdateNamespaceInfo.newBuilder() // has options to update Namespace info
                .setDescription("your updated namespace description") // updates the description in the Namespace info
                .build())
        .setConfig(
            NamespaceConfig.newBuilder() // has options to update Namespace configuration
                .setWorkflowExecutionRetentionTtl(Durations.fromHours(30)) // updates the Retention Period for the Namespace "your-namespace-name" to 30 hours
                .build())
        .build();
UpdateNamespaceResponse updateNamespaceResponse =
    namespaceservice.blockingStub().updateNamespace(updateNamespaceRequest);
//...
```

- Get details for a registered Namespace on your Temporal Service:
  - With the Temporal CLI: [`temporal operator namespace describe`](/cli/operator#describe)
  - Use the [`DescribeNamespace` API](https://github.com/temporalio/api/blob/e5cf521c6fdc71c69353f3d2ac5506dd6e827af8/temporal/api/workflowservice/v1/service.proto) to return information and configuration details for a registered Namespace. Example:
```java
//...
DescribeNamespaceRequest descNamespace =
    DescribeNamespaceRequest.newBuilder()
        .setNamespace("your-namespace-name") // specify the Namespace you want details for
        .build();
DescribeNamespaceResponse describeNamespaceResponse =
    namespaceservice.blockingStub().describeNamespace(descNamespace);
System.out.println("Namespace Description: " + describeNamespaceResponse);
//...
```

- Get details for all registered Namespaces on your Temporal Service:
  - With the Temporal CLI: [`temporal operator namespace list`](/cli/operator#list)
  - Use the [`ListNamespaces` API](https://github.com/temporalio/api/blob/e5cf521c6fdc71c69353f3d2ac5506dd6e827af8/temporal/api/workflowservice/v1/service.proto) to return information and configuration details for all registered Namespaces on your Temporal Service. Example:

```java
//...
ListNamespacesRequest listNamespaces = ListNamespacesRequest.newBuilder().build();
ListNamespacesResponse listNamespacesResponse =
    namespaceservice.blockingStub().listNamespaces(listNamespaces);
// Lists 1-100 Namespaces (one page) in the active Temporal Service.
// To list all, set the page size or loop until the NextPageToken is empty.
//...
```

- Deprecate a Namespace: The [`DeprecateNamespace` API](https://github.com/temporalio/api/blob/e5cf521c6fdc71c69353f3d2ac5506dd6e827af8/temporal/api/workflowservice/v1/service.proto) updates the state of a registered Namespace to "DEPRECATED". Once a Namespace is deprecated, you cannot start new Workflow Executions on it. All existing and running Workflow Executions on a deprecated Namespace will continue to run. Example:

```java
//...
DeprecateNamespaceRequest deprecateNamespace =
    DeprecateNamespaceRequest.newBuilder()
        .setNamespace("your-namespace-name") // specify the Namespace that you want to deprecate
        .build();
DeprecateNamespaceResponse response =
    namespaceservice.blockingStub().deprecateNamespace(deprecateNamespace);
//...
```

- Delete a Namespace: The [`DeleteNamespace` API](https://github.com/temporalio/api/blob/e5cf521c6fdc71c69353f3d2ac5506dd6e827af8/temporal/api/workflowservice/v1/service.proto) deletes a Namespace. Deleting a Namespace deletes all running and completed Workflow Executions on the Namespace, and removes them from the persistence store and the visibility store. Example:

```java
//...
DeleteNamespaceResponse res =
    OperatorServiceStubs.newServiceStubs(
            OperatorServiceStubsOptions.newBuilder()
                .setChannel(service.getRawChannel())
                .validateAndBuildWithDefaults())
        .blockingStub()
        .deleteNamespace(
            DeleteNamespaceRequest.newBuilder().setNamespace("your-namespace-name").build());
//...
```
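The update, describe, list, and deprecate examples above reference a `namespaceservice` variable. Because `UpdateNamespace`, `DescribeNamespace`, `ListNamespaces`, and `DeprecateNamespace` are WorkflowService RPCs (see the proto links above), this can be an ordinary `WorkflowServiceStubs` instance; a minimal sketch, with the variable name matching the examples but otherwise illustrative:

```java
import io.temporal.serviceclient.WorkflowServiceStubs;

// The same stubs type used for registration also serves the Namespace
// management RPCs shown above.
WorkflowServiceStubs namespaceservice = WorkflowServiceStubs.newLocalServiceStubs();
```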
---

## Observability - Java SDK

The observability section of the Temporal Developer's guide covers the many ways to view the current state of your [Temporal Application](/temporal#temporal-application)—that is, ways to view which [Workflow Executions](/workflow-execution) are tracked by the [Temporal Platform](/temporal#temporal-platform) and the state of any specified Workflow Execution, either currently or at points of an execution.

This section covers features related to viewing the state of the application, including:

- [Emit metrics](#metrics)
- [Set up tracing](#tracing)
- [Log from a Workflow](#logging)
- [Visibility APIs](#visibility)

## Emit metrics {#metrics}

Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the [SDK metrics reference](/references/sdk-metrics).

- For an overview of Prometheus and Grafana integration, refer to the [Monitoring](/self-hosted-guide/monitoring) guide.
- For a list of metrics, see the [SDK metrics reference](/references/sdk-metrics).
- For an end-to-end example that exposes metrics with the Java SDK, refer to the [samples-java](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/metrics) repo.

To emit metrics with the Java SDK, use the [`MicrometerClientStatsReporter`](https://github.com/temporalio/sdk-java/blob/55ee7894aec427d7e384c3519732bdd61119961a/src/main/java/io/temporal/common/reporter/MicrometerClientStatsReporter.java#L34) class to integrate with a Micrometer `MeterRegistry` configured for your metrics backend. [Micrometer](https://micrometer.io/docs) is a popular Java framework that provides integration with Prometheus and other backends.

The following example shows how to use `MicrometerClientStatsReporter` to define the metrics scope and set it with the `WorkflowServiceStubsOptions`.

```java
//...
// See the Micrometer documentation for configuration details on other supported monitoring systems.
// This example shows how to set up the Prometheus registry and stats reporter.
PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
StatsReporter reporter = new MicrometerClientStatsReporter(registry);
// set up a new scope, report every 10 seconds
Scope scope =
    new RootScopeBuilder()
        .reporter(reporter)
        .reportEvery(com.uber.m3.util.Duration.ofSeconds(10));
// for Prometheus collection, expose a scrape endpoint.
//...
// add metrics scope to WorkflowServiceStub options
WorkflowServiceStubsOptions stubOptions =
    WorkflowServiceStubsOptions.newBuilder().setMetricsScope(scope).build();
//...
```

For more details, see the [Java SDK Samples](https://github.com/temporalio/samples-java/tree/637c2e66fd2dab43d9f3f39e5fd9c55e4f3884f0/core/src/main/java/io/temporal/samples/metrics). For details on configuring a Prometheus scrape endpoint with Micrometer, see the [Micrometer Prometheus Configuring](https://docs.micrometer.io/micrometer/reference/implementations/prometheus.html#_configuring) documentation.

## Set up tracing {#tracing}

Tracing allows you to view the call graph of a Workflow along with its Activities, Nexus Operations, and any Child Workflows.

Temporal Web's tracing capabilities mainly track Activity Execution within a Temporal context. If you need custom tracing specific for your use case, you should make use of context propagation to add tracing logic accordingly.

To configure tracing in Java, register the `OpenTracingClientInterceptor()` interceptor. You can register the interceptors on both the Temporal Client side and the Worker side.

The following code examples demonstrate the `OpenTracingClientInterceptor()` on the Temporal Client.

```java
WorkflowClientOptions.newBuilder()
    //...
    .setInterceptors(new OpenTracingClientInterceptor())
    .build();
```

```java
WorkflowClientOptions clientOptions =
    WorkflowClientOptions.newBuilder()
        .setInterceptors(new OpenTracingClientInterceptor(JaegerUtils.getJaegerOptions(type)))
        .build();
WorkflowClient client = WorkflowClient.newInstance(service, clientOptions);
```
The following code examples demonstrate the `OpenTracingWorkerInterceptor()` on the Worker.

```java
WorkerFactoryOptions.newBuilder()
    //...
    .setWorkerInterceptors(new OpenTracingWorkerInterceptor())
    .build();
```

```java
WorkerFactoryOptions factoryOptions =
    WorkerFactoryOptions.newBuilder()
        .setWorkerInterceptors(
            new OpenTracingWorkerInterceptor(JaegerUtils.getJaegerOptions(type)))
        .build();
WorkerFactory factory = WorkerFactory.newInstance(client, factoryOptions);
```

For more information, see the Temporal [OpenTracing module](https://github.com/temporalio/sdk-java/blob/master/temporal-opentracing/README.md).

### Context Propagation Over Nexus Operation Calls

Nexus does not use the standard context propagator header structure. Instead, it relies on a Temporal-agnostic protocol designed to connect arbitrary systems. To propagate context over Nexus Operation calls, the context is serialized into a `Map`, which normalizes all keys to lowercase.

Because Nexus uses this custom format, and because Nexus calls may involve external systems, the `ContextPropagator` interface doesn't apply to Nexus headers. Context must be explicitly propagated through interceptors, as shown in the [Nexus Context Propagation sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/nexuscontextpropagation).

## Log from a Workflow {#logging}

Logging enables you to record critical information during code execution. Loggers create an audit trail and capture information about your Workflow's operation. An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.

The logger supports the following logging levels:

| Level   | Use                                                                                                       |
| ------- | --------------------------------------------------------------------------------------------------------- |
| `TRACE` | The most detailed level of logging, used for very fine-grained information.                                |
| `DEBUG` | Detailed information, typically useful for debugging purposes.                                             |
| `INFO`  | General information about the application's operation.                                                     |
| `WARN`  | Indicates potentially harmful situations or minor issues that don't prevent the application from working.  |
| `ERROR` | Indicates error conditions that might still allow the application to continue running.                     |

The Temporal SDK core normally uses `WARN` as its default logging level.

To get a standard `slf4j` logger in your Workflow code, use the [`Workflow.getLogger`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html) method.

```java
private static final Logger logger = Workflow.getLogger(DynamicDslWorkflow.class);
```

Logs in replay mode are omitted unless the [`WorkerFactoryOptions.Builder.setEnableLoggingInReplay(boolean)`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/worker/WorkerFactoryOptions.Builder.html#setEnableLoggingInReplay(boolean)) method is set to true.

### How to provide a custom logger {#custom-logger}

To set a custom logger, supply your own logging implementation and configuration details the same way you would in any other Java application.

## Visibility APIs {#visibility}

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

### How to use Search Attributes {#search-attributes}

The typical method of retrieving a Workflow Execution is by its Workflow Id.
However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments.

You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime` and `ExecutionStatus` are automatically added to Workflow Executions.
- _Custom Search Attributes_ can contain their own domain-specific data (like `customerId` or `numItems`).
- A few [generic Custom Search Attributes](/search-attribute#custom-search-attribute) like `CustomKeywordField` and `CustomIntField` are created by default in Temporal's [Docker Compose](https://github.com/temporalio/docker-compose).

The steps for using custom Search Attributes are:

- Create a new Search Attribute in your Temporal Service using `temporal operator search-attribute create` or the Cloud UI.
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client by including it as an option when starting the Execution.
  - In the Workflow by calling `upsertTypedSearchAttributes`.
- Read the value of the Search Attribute:
  - On the Client by calling `DescribeWorkflow`.
  - In the Workflow by looking at `WorkflowInfo`.
- Query Workflow Executions by the Search Attribute using a [List Filter](/list-filter):
  - [In the Temporal CLI](/cli/workflow#list).
  - In code by calling `ListWorkflowExecutions`, as shown in the sketch after this section.

### How to set custom Search Attributes {#custom-search-attributes}

After you've created custom Search Attributes in your Temporal Service (using `temporal operator search-attribute create` or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

When starting a Workflow Execution with your Client, include the Custom Search Attribute in the options using `WorkflowOptions.newBuilder().setTypedSearchAttributes()`:

```java
// In a shared constants file, so all files have access
public static final SearchAttributeKey<Boolean> IS_ORDER_FAILED =
    SearchAttributeKey.forBoolean("isOrderFailed");
...
// In main
WorkflowOptions options =
    WorkflowOptions.newBuilder()
        .setWorkflowId(workflowID)
        .setTaskQueue(Constants.TASK_QUEUE_NAME)
        .setTypedSearchAttributes(generateSearchAttributes())
        .build();
PizzaWorkflow workflow = client.newWorkflowStub(PizzaWorkflow.class, options);
...
// Further down in the file
private static SearchAttributes generateSearchAttributes() {
  return SearchAttributes.newBuilder().set(Constants.IS_ORDER_FAILED, false).build();
}
```

Each custom Search Attribute is identified by a typed [`SearchAttributeKey`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/SearchAttributeKey.html#forBoolean(java.lang.String)), which pairs the attribute name with a specific value type. Currently the following types are supported:

- Boolean
- Double
- Long
- Keyword
- KeywordList
- Text

In this example, `isOrderFailed` is set as a Search Attribute. This attribute is useful for querying Workflows based on the success or failure of customer orders.
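As referenced in the steps above, here is a minimal sketch of querying by this Search Attribute from code. It assumes the `client` from the example above and uses the raw `ListWorkflowExecutions` gRPC request; the Namespace value is illustrative:

```java
import io.temporal.api.workflowservice.v1.ListWorkflowExecutionsRequest;
import io.temporal.api.workflowservice.v1.ListWorkflowExecutionsResponse;

// Build a List Filter query over the custom Search Attribute.
ListWorkflowExecutionsRequest request =
    ListWorkflowExecutionsRequest.newBuilder()
        .setNamespace("default")
        .setQuery("isOrderFailed = true")
        .build();
ListWorkflowExecutionsResponse response =
    client.getWorkflowServiceStubs().blockingStub().listWorkflowExecutions(request);
// Print the Workflow Id of each match.
response
    .getExecutionsList()
    .forEach(info -> System.out.println(info.getExecution().getWorkflowId()));
```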
### How to upsert Search Attributes {#upsert-search-attributes}

Within the Workflow code, you can dynamically add or update Search Attributes using [`upsertTypedSearchAttributes`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#upsertTypedSearchAttributes(io.temporal.common.SearchAttributeUpdate...)). This method is particularly useful for Workflows whose attributes need to change based on internal logic or external events.

```java
...
// Existing Workflow Logic
Map<String, Object> searchAttribute = new HashMap<>();
Distance distance;
try {
  distance = activities.getDistance(address);
  searchAttribute.put("isOrderFailed", false);
  Workflow.upsertTypedSearchAttributes(Constants.IS_ORDER_FAILED.valueSet(false));
} catch (NullPointerException e) {
  searchAttribute.put("isOrderFailed", true);
  Workflow.upsertTypedSearchAttributes(Constants.IS_ORDER_FAILED.valueSet(true));
  throw new NullPointerException("Unable to get distance");
}
```

### How to remove a Search Attribute from a Workflow {#remove-search-attribute}

To remove a Search Attribute that was previously set, unset its value with `valueUnset()`.

```java
// In a shared constants file, so all files have access
public static final SearchAttributeKey<Boolean> IS_ORDER_FAILED =
    SearchAttributeKey.forBoolean("isOrderFailed");
...
Workflow.upsertTypedSearchAttributes(Constants.IS_ORDER_FAILED.valueUnset());
```

---

## Schedules - Java SDK

This page shows how to do the following:

- [How to Schedule a Workflow](#schedule-a-workflow)
- [How to create a Schedule in Java](#create-schedule)
- [How to backfill a Schedule in Java](#backfill-schedule)
- [How to delete a Schedule in Java](#delete-schedule)
- [How to describe a Schedule in Java](#describe-schedule)
- [How to list a Schedule in Java](#list-schedule)
- [How to pause a Schedule in Java](#pause-schedule)
- [How to trigger a Schedule in Java](#trigger-schedule)
- [How to update a Schedule in Java](#update-schedule)
- [How to set a Cron Schedule in Java](#cron-schedule)
- [Start Delay](#start-delay)

## How to Schedule a Workflow {#schedule-a-workflow}

Scheduling Workflows is a crucial aspect of any automation process, especially when dealing with time-sensitive tasks. By scheduling a Workflow, you can automate repetitive tasks, reduce the need for manual intervention, and ensure timely execution of your business processes.

Use any of the following actions to help Schedule a Workflow Execution and take control over your automation process.

### How to create a Schedule in Java {#create-schedule}

The create action enables you to create a new Schedule. When you create a new Schedule, a unique Schedule ID is generated, which you can use to reference the Schedule in other Schedule commands.

To create a Scheduled Workflow Execution in Java, use the `createSchedule()` method on the `ScheduleClient`. Schedules must be initialized with a Schedule ID.

```java
Schedule schedule =
    Schedule.newBuilder()
        .setAction(
            ScheduleActionStartWorkflow.newBuilder()
                .setWorkflowType(HelloSchedules.GreetingWorkflow.class)
                .setArguments("World")
                .setOptions(
                    WorkflowOptions.newBuilder()
                        .setWorkflowId("WorkflowId")
                        .setTaskQueue("TaskQueue")
                        .build())
                .build())
        .setSpec(ScheduleSpec.newBuilder().build())
        .build();

// Create a schedule on the server
ScheduleHandle handle =
    scheduleClient.createSchedule("ScheduleId", schedule, ScheduleOptions.newBuilder().build());
```

:::tip Schedule Auto-Deletion

Once a Schedule has completed creating all its Workflow Executions, the Temporal Service deletes it since it won't fire again. The Temporal Service doesn't guarantee when this removal will happen.

:::
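The example above assumes a `scheduleClient`; a minimal sketch of creating one against a local Temporal Service (the variable names are illustrative):

```java
import io.temporal.client.schedules.ScheduleClient;
import io.temporal.serviceclient.WorkflowServiceStubs;

// The ScheduleClient wraps the same service stubs used by the WorkflowClient.
WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
ScheduleClient scheduleClient = ScheduleClient.newInstance(service);
```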
### How to backfill a Schedule in Java {#backfill-schedule}

The backfill action executes Actions ahead of their specified time range. This command is useful when you need to execute a missed or delayed Action, or when you want to test the Workflow before its scheduled time.

To backfill a Scheduled Workflow Execution in Java, use the `backfill()` method on the `ScheduleHandle`.

```java
ScheduleHandle handle = client.getHandle("schedule-id");

Instant now = Instant.now();
handle.backfill(
    Arrays.asList(
        new ScheduleBackfill(now.minusMillis(5500), now.minusMillis(2500)),
        new ScheduleBackfill(now.minusMillis(2500), now)));
```

### How to delete a Schedule in Java {#delete-schedule}

The delete action enables you to delete a Schedule. When you delete a Schedule, it does not affect any Workflows that were started by the Schedule.

To delete a Scheduled Workflow Execution in Java, use the `delete()` method on the `ScheduleHandle`.

```java
ScheduleHandle handle = client.getHandle("schedule-id");
handle.delete();
```

### How to describe a Schedule in Java {#describe-schedule}

The describe action shows the current Schedule configuration, including information about past, current, and future Workflow Runs. This command is helpful when you want to get a detailed view of the Schedule and its associated Workflow Runs.

To describe a Scheduled Workflow Execution in Java, use the `describe()` method on the `ScheduleHandle`.

```java
ScheduleHandle handle = client.getHandle("schedule-id");
ScheduleDescription description = handle.describe();
```

### How to list a Schedule in Java {#list-schedule}

The list action lists all the available Schedules. This command is useful when you want to view a list of all the Schedules and their respective Schedule IDs.

To list all Schedules, use the `listSchedules()` asynchronous method on the `ScheduleClient`. If a Schedule is added or deleted, it may not be available in the list immediately.

```java
Stream<ScheduleListDescription> scheduleStream = client.listSchedules();
```

### How to pause a Schedule in Java {#pause-schedule}

The pause action enables you to pause and unpause a Schedule. When you pause a Schedule, all the future Workflow Runs associated with the Schedule are temporarily stopped. This command is useful when you want to temporarily halt a Workflow due to maintenance or any other reason.

To pause a Scheduled Workflow Execution in Java, use the `pause()` method on the `ScheduleHandle`. You can pass a `note` to the `pause()` method to provide a reason for pausing the schedule.

```java
ScheduleHandle handle = client.getHandle("schedule-id");
handle.pause("Pausing the schedule for now");
```

### How to trigger a Schedule in Java {#trigger-schedule}

The trigger action triggers an immediate action with a given Schedule. By default, this action is subject to the Overlap Policy of the Schedule. This command is helpful when you want to execute a Workflow outside of its scheduled time.

To trigger a Scheduled Workflow Execution in Java, use the `trigger()` method on the `ScheduleHandle`.

```java
ScheduleHandle handle = client.getHandle("schedule-id");
handle.trigger();
```

### How to update a Schedule in Java {#update-schedule}

The update action enables you to update an existing Schedule. This command is useful when you need to modify the Schedule's configuration, such as changing the start time, end time, or interval.

To update a Schedule, create a callback function that takes a `ScheduleUpdateInput` and returns a `ScheduleUpdate`, building the update from the Schedule's description. The following example updates the Schedule to set a limited number of actions.
```java
ScheduleHandle handle = client.getHandle("schedule-id");
handle.update(
    (ScheduleUpdateInput input) -> {
      Schedule.Builder builder = Schedule.newBuilder(input.getDescription().getSchedule());
      // Limit the schedule to a fixed number of remaining actions
      builder.setState(
          ScheduleState.newBuilder()
              .setLimitedAction(true)
              .setRemainingActions(10)
              .build());
      return new ScheduleUpdate(builder.build());
    });
```

## How to set a Cron Schedule in Java {#cron-schedule}

:::caution Cron support is not recommended

We recommend using [Schedules](https://docs.temporal.io/schedule) instead of Cron Jobs. Schedules were built to provide a better developer experience, including more configuration options and the ability to update or pause running Schedules.

:::

A [Temporal Cron Job](/cron-job) is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution.

A Cron Schedule is provided as an option when the call to spawn a Workflow Execution is made.

Set the Cron Schedule with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setCronSchedule`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

Setting `setCronSchedule` changes the Workflow Execution into a Temporal Cron Job. The default timezone for a Cron is UTC.

- Type: `String`
- Default: None

```java
// create Workflow stub for YourWorkflowInterface
YourWorkflowInterface workflow1 =
    YourWorker.yourclient.newWorkflowStub(
        YourWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWF")
            .setTaskQueue(YourWorker.TASK_QUEUE)
            // Set Cron Schedule
            .setCronSchedule("* * * * *")
            .build());
```

Temporal Workflow Schedule Cron strings follow this format:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```

For more details, see the [Cron Sample](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/hello/HelloCron.java).

## Start Delay {#start-delay}

**How to delay the start of a Workflow Execution using Start Delay with the Temporal Java SDK.**

Use `StartDelay` to schedule a Workflow Execution at a specific one-time future point rather than on a recurring schedule.

Create an instance of [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) in the Client code and set `StartDelay` using `setStartDelay`.

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWorkflow")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Start the workflow in 12 hours
            .setStartDelay(Duration.ofHours(12))
            .build());
```

---

## Set up your local development environment with the Java SDK

---

# Quickstart

Configure your local development environment to get started developing with Temporal.

## Install the Java JDK

Make sure you have the Java JDK installed; you can check your installed version by running `java -version`. You can either download a copy directly from Oracle or select an OpenJDK distribution from your preferred vendor. You'll also need either Maven or Gradle installed.
**If you don't have Maven:** [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) from Apache.org, or use Homebrew: `brew install maven`.

**If you don't have Gradle:** [Download](https://gradle.org/install/) from Gradle.org, use [IntelliJ IDEA](https://www.jetbrains.com/idea/) (bundled), or use Homebrew: `brew install gradle`.

## Create a Project

Now that you have your build tool installed, create a project to manage your dependencies and build your Temporal application. Choose your build tool to create the appropriate project structure.

For Maven, the following creates a standard project with the necessary directories and a basic pom.xml file:

```bash
mkdir temporal-java-project
cd temporal-java-project
mvn archetype:generate -DgroupId=helloworkflow -DartifactId=temporal-hello-world -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
cd temporal-hello-world
```

For Gradle, the following creates a project with build.gradle and the standard Gradle directory structure:

```bash
mkdir temporal-hello-world
cd temporal-hello-world
gradle init --type java-application --project-name temporal-hello-world --package helloworkflow
```

## Add Temporal Java SDK Dependencies

Now add the Temporal SDK dependencies to your project configuration file.

For Maven, add the following dependencies to your `pom.xml` file:

```xml
<dependency>
  <groupId>io.temporal</groupId>
  <artifactId>temporal-sdk</artifactId>
  <version>1.24.1</version>
</dependency>
<dependency>
  <groupId>io.temporal</groupId>
  <artifactId>temporal-testing</artifactId>
  <version>1.24.1</version>
  <scope>test</scope>
</dependency>
```

For Gradle, add the following lines to your `build.gradle` file, then run `./gradlew build`:

```groovy
plugins {
    id 'application'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'io.temporal:temporal-sdk:1.24.1'
    testImplementation 'io.temporal:temporal-testing:1.24.1'
}

application {
    // Define the main class for the application
    mainClass = 'helloworkflow.Starter'
}

// Helper tasks to run the worker and the starter
tasks.register('runWorker', JavaExec) {
    group = 'application'
    description = 'Run the Temporal worker'
    classpath = sourceSets.main.runtimeClasspath
    mainClass = 'helloworkflow.SayHelloWorker'
}

tasks.register('runStarter', JavaExec) {
    group = 'application'
    description = 'Run the workflow starter'
    classpath = sourceSets.main.runtimeClasspath
    mainClass = 'helloworkflow.Starter'
}
```

Next, you'll configure a local Temporal Service for development.

## Install Temporal CLI and start the development server

The fastest way to get a development version of the Temporal Service running on your local machine is to use [Temporal CLI](https://docs.temporal.io/cli). Choose your operating system to install Temporal CLI:

- **macOS:** Install the Temporal CLI using Homebrew: `brew install temporal`
- **Windows:** Download the Temporal CLI archive for your architecture (Windows amd64 or Windows arm64), extract it, and add `temporal.exe` to your PATH.
- **Linux:** Download the Temporal CLI for your architecture (Linux amd64 or Linux arm64), extract the archive, and move the `temporal` binary into your PATH, for example: `sudo mv temporal /usr/local/bin`

## Start the development server

Once you've installed Temporal CLI and added it to your PATH, open a new Terminal window and run the following command, keeping it running in the background:

```bash
temporal server start-dev
```

This command starts a local Temporal Service.
It starts the Web UI, creates the default Namespace, and uses an in-memory database. The Temporal Service will be available on localhost:7233. The Temporal Web UI will be available at http://localhost:8233.

**Change the Web UI port:** The Temporal Web UI may be on a different port in some examples or tutorials. To change the port for the Web UI, use the `--ui-port` option when starting the server:

```bash
temporal server start-dev --ui-port 8080
```

The Temporal Web UI will now be available at http://localhost:8080.

Leave the local Temporal Service running as you work through tutorials and other projects. You can stop the Temporal Service at any time by pressing CTRL+C.

Once you have everything installed, you're ready to build apps with Temporal on your local machine.

## Run Hello World: Test Your Installation

Now let's verify your setup is working by creating and running a complete Temporal application with both a Workflow and Activity. This test will confirm that:

- The Temporal Java SDK is properly installed
- Your local Temporal Service is running
- You can successfully create and execute Workflows and Activities
- The communication between components is functioning correctly

### 1. Create the Activity Interface

Create an Activity interface file (GreetActivities.java):

_Note that all files for this quickstart will be created under src/main/java/helloworkflow._

```java
package helloworkflow;

import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityMethod;

@ActivityInterface
public interface GreetActivities {
  @ActivityMethod
  String greet(String name);
}
```

An Activity is a method that executes a single, well-defined action (either short or long running). Activities often involve interacting with the outside world, such as sending emails, making network requests, writing to a database, or calling an API, and these operations are prone to failure. If an Activity fails, Temporal automatically retries it based on your configuration. You define an Activity in Java as an annotated interface and its implementation.

### 2. Create the Activity Implementation

Create an Activity implementation file (GreetActivitiesImpl.java):

```java
package helloworkflow;

public class GreetActivitiesImpl implements GreetActivities {
  @Override
  public String greet(String name) {
    return "Hello " + name;
  }
}
```

### 3. Create the Workflow

Create a Workflow file (SayHelloWorkflow.java):

```java
package helloworkflow;

import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;

@WorkflowInterface
public interface SayHelloWorkflow {
  @WorkflowMethod
  String sayHello(String name);
}
```

Workflows orchestrate Activities and contain the application logic. Temporal Workflows are resilient. They can run and keep running for years, even if the underlying infrastructure fails. If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off. You define a Workflow in Java as an annotated interface and its implementation.

### 4. Create the Workflow Implementation

Create a Workflow implementation file (SayHelloWorkflowImpl.java):

```java
package helloworkflow;

import io.temporal.activity.ActivityOptions;
import io.temporal.workflow.Workflow;
import java.time.Duration;

public class SayHelloWorkflowImpl implements SayHelloWorkflow {
  private final GreetActivities activities =
      Workflow.newActivityStub(
          GreetActivities.class,
          ActivityOptions.newBuilder()
              .setStartToCloseTimeout(Duration.ofSeconds(5))
              .build());

  @Override
  public String sayHello(String name) {
    return activities.greet(name);
  }
}
```
### 5. Create and Run the Worker

Create a Worker file (SayHelloWorker.java):

```java
package helloworkflow;

import io.temporal.client.WorkflowClient;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.worker.Worker;
import io.temporal.worker.WorkerFactory;

public class SayHelloWorker {
  public static void main(String[] args) {
    WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
    WorkflowClient client = WorkflowClient.newInstance(service);
    WorkerFactory factory = WorkerFactory.newInstance(client);

    Worker worker = factory.newWorker("my-task-queue");
    worker.registerWorkflowImplementationTypes(SayHelloWorkflowImpl.class);
    worker.registerActivitiesImplementations(new GreetActivitiesImpl());

    System.out.println("Starting SayHelloWorker for task queue 'my-task-queue'...");
    factory.start();
  }
}
```

With your Activity and Workflow defined, you need a Worker to execute them. Open a new terminal and run the Worker.

For Maven:

```bash
cd temporal-hello-world
mvn compile exec:java -Dexec.mainClass="helloworkflow.SayHelloWorker"
```

For Gradle:

```bash
./gradlew runWorker
```

A Worker polls the Task Queue that you configure it with, looking for work to do. Once the Worker dequeues a Workflow or Activity task from the Task Queue, it executes that task. Workers are a crucial part of your Temporal application as they're what actually execute the tasks defined in your Workflows and Activities. For more information on Workers, see [Understanding Temporal](/evaluate/understanding-temporal#workers) and a [deep dive into Workers](/workers).

### 6. Execute the Workflow

Now that your Worker is running, it's time to start a Workflow Execution. This final step, using a file named `Starter.java`, validates that everything is working correctly. Create a separate file called `Starter.java`:

```java
package helloworkflow;

import io.temporal.client.WorkflowClient;
import io.temporal.client.WorkflowOptions;
import io.temporal.serviceclient.WorkflowServiceStubs;

public class Starter {
  public static void main(String[] args) {
    WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
    WorkflowClient client = WorkflowClient.newInstance(service);

    SayHelloWorkflow workflow =
        client.newWorkflowStub(
            SayHelloWorkflow.class,
            WorkflowOptions.newBuilder()
                .setTaskQueue("my-task-queue")
                .setWorkflowId("say-hello-workflow-id")
                .build());

    String result = workflow.sayHello("Temporal");
    System.out.println("Workflow result: " + result);
  }
}
```

While your worker is still running, open a new terminal and run the starter.

For Maven:

```bash
cd temporal-hello-world
mvn compile exec:java -Dexec.mainClass="helloworkflow.Starter"
```

For Gradle:

```bash
./gradlew runStarter
```

### Verify Success

If everything is working correctly, you should see:

- Worker processing the workflow and activity
- Output: `Workflow result: Hello Temporal`
- Workflow Execution details in the [Temporal Web UI](http://localhost:8233)

Next step: Run your first Temporal Application, where you create a basic Workflow and run it with the Temporal Java SDK.

---

## Side Effects - Java SDK

## Side Effects {#side-effects}

Side Effects are used to execute non-deterministic code, such as generating a UUID or a random number, without compromising determinism in the Workflow. This is done by storing the non-deterministic results of the Side Effect into the Workflow [Event History](/workflow-execution/event#event-history).

A Side Effect does not re-execute during a Replay. Instead, it returns the recorded result from the Workflow Execution Event History.

Side Effects should not fail. An exception that is thrown from the Side Effect causes failure and retry of the current Workflow Task.

An Activity or a Local Activity may also be used instead of a Side Effect, as its result is also persisted in Workflow Execution History.
:::note

You shouldn't modify the Workflow state inside a Side Effect function, because it is not re-executed during Replay. A Side Effect function should be used only to return a value.

:::

To use a Side Effect in Java, call the [`sideEffect()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#sideEffect(java.lang.Class,io.temporal.workflow.Functions.Func)) function in your Workflow code and compute the non-deterministic value inside it.

```java
int random = Workflow.sideEffect(Integer.class, () -> new Random().nextInt(100));
if (random < 50) {
  // ...
} else {
  // ...
}
```

Here's another example that uses `sideEffect()`.

```java
// implementation of the @WorkflowMethod
public void execute() {
  int randomInt =
      Workflow.sideEffect(
          int.class,
          () -> {
            Random random = new SecureRandom();
            return random.nextInt();
          });
  String userHome = Workflow.sideEffect(String.class, () -> System.getenv("USER_HOME"));
  if (randomInt % 2 == 0) {
    // ...
  } else {
    // ...
  }
}
```

Java also provides a deterministic method to generate random numbers or random UUIDs.

To generate random numbers in a deterministic method, use [`newRandom()`](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#newRandom).

```java
// implementation of the @WorkflowMethod
public void execute() {
  int randomInt = Workflow.newRandom().nextInt();
  // ...
}
```

To generate a random UUID in a deterministic method, use [`randomUUID()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#randomUUID()).

```java
// implementation of the @WorkflowMethod
public void execute() {
  String randomUUID = Workflow.randomUUID().toString();
  // ...
}
```

---

## Spring Boot Integration - Java SDK

This guide introduces the [Temporal Spring Boot](https://central.sonatype.com/artifact/io.temporal/temporal-spring-boot-starter?smo=true) integration. The Temporal Spring Boot integration is the easiest way to get started using the Temporal Java SDK if you are a current [Spring](https://spring.io/) user.

This section includes the following topics:

- [Setup Dependency](#setup-dependency)
- [Connect to your Temporal Service](#connect)
- [Configure Workers](#configure-workers)
- [Customize Options](#customize-options)
- [Interceptors](#interceptors)
- [Integrations](#integrations)
- [Testing](#testing)

## Setup Dependency {#setup-dependency}

To start using the Temporal Spring Boot integration, you need to add [`io.temporal:temporal-spring-boot-starter`](https://search.maven.org/artifact/io.temporal/temporal-spring-boot-starter) as a dependency to your Spring project:

:::note

Temporal's Spring Boot integration currently supports Spring Boot 2.x and 3.x

:::

**[Apache Maven](https://maven.apache.org/):**

```xml
<dependency>
  <groupId>io.temporal</groupId>
  <artifactId>temporal-spring-boot-starter</artifactId>
  <version>1.31.0</version>
</dependency>
```

**[Gradle Groovy DSL](https://gradle.org/):**

```groovy
implementation ("io.temporal:temporal-spring-boot-starter:1.31.0")
```

## Connect {#connect}

See the [Temporal Client documentation](/develop/java/temporal-client) for more information about connecting to a Temporal Service.

To create an autoconfigured `WorkflowClient`, you need to specify some connection details in your `application.yml` file, as described in the next section.
### Connect to your local Temporal Service

```yaml
spring.temporal:
  connection:
    target: local # you can specify a host:port here for a remote connection
```

This is enough to autowire a `WorkflowClient` in your Spring Boot application:

```java
@SpringBootApplication
class App {
  @Autowired private WorkflowClient workflowClient;
}
```

### Connect to a custom Namespace

You can also connect to a custom Namespace by specifying the `spring.temporal.namespace` property.

```yaml
spring.temporal:
  connection:
    target: local # you can specify a host:port here for a remote connection
  namespace: your-namespace # you can specify a custom Namespace that you are using
```

### Connect to Temporal Cloud

You can also connect to Temporal Cloud, using either an API key or mTLS for authentication. See the [Connect to Temporal Cloud](/develop/java/temporal-client#connect-to-temporal-cloud) section for more information about connecting to Temporal Cloud.

#### Using an API key

```yaml
spring.temporal:
  connection:
    target: <your Temporal Cloud endpoint>
    apiKey: <your API key>
  namespace: <your Namespace ID>
```

#### Using mTLS

```yaml
spring.temporal:
  connection:
    target: <your Temporal Cloud endpoint>
    mtls:
      key-file: /path/to/key.key
      cert-chain-file: /path/to/cert.pem # If you use PKCS12 (.pkcs12, .pfx or .p12), you don't need to set it because the certificates chain is bundled into the key file
  namespace: <your Namespace ID>
```

## Configure Workers {#configure-workers}

Temporal's Spring Boot integration supports two configuration methods for Workers: explicit configuration and auto-discovery.

### Explicit configuration

```yaml
spring.temporal:
  workers:
    - task-queue: your-task-queue-name
      name: your-worker-name # unique name of the Worker. If not specified, the Task Queue is used as the Worker name.
      workflow-classes:
        - your.package.YourWorkflowImpl
      activity-beans:
        - activity-bean-name1
```

### Auto Discovery

Auto Discovery allows you to skip specifying Workflow classes, Activity beans, and Nexus Service beans explicitly in the config by referencing Worker Task Queue names or Worker names on Workflow, Activity, and Nexus Service implementations. Auto-discovery is applied after and on top of an explicit configuration.

```yaml
spring.temporal:
  workers-auto-discovery:
    packages:
      - your.package # enumerate all the packages that contain your workflow implementations.
```

#### What is auto-discovered:

- Workflow implementation classes annotated with `io.temporal.spring.boot.WorkflowImpl`
- Activity beans present in the Spring context whose implementations are annotated with `io.temporal.spring.boot.ActivityImpl`
- Nexus Service beans present in the Spring context whose implementations are annotated with `io.temporal.spring.boot.NexusServiceImpl`
- Workers, if a Task Queue is referenced by the annotations but not explicitly configured. The default configuration will be used.

:::note

`io.temporal.spring.boot.ActivityImpl` and `io.temporal.spring.boot.NexusServiceImpl` should be applied to beans; one way to do this is to annotate your Activity implementation class with `@Component`.

:::

```java
@Component
@ActivityImpl(workers = "myWorker")
public class MyActivityImpl implements MyActivity {
  @Override
  public String execute(String input) {
    return input;
  }
}
```

:::note

Auto-discovered Workflow implementation classes, Activity beans, and Nexus Service beans will be registered with the configured Workers if not already registered.

:::
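For completeness, a Workflow counterpart to the Activity bean above might look like the following sketch. Here `MyWorkflow` and the Task Queue name are hypothetical, and `taskQueues` is the annotation attribute for binding by Task Queue rather than by Worker name; unlike Activities, Workflow implementations are registered as classes, not Spring beans, so no `@Component` is needed:

```java
import io.temporal.spring.boot.WorkflowImpl;

// Auto-discovered because of the @WorkflowImpl annotation; registered with the
// Worker polling "your-task-queue-name".
@WorkflowImpl(taskQueues = "your-task-queue-name")
public class MyWorkflowImpl implements MyWorkflow {
  @Override
  public String process(String input) {
    return input;
  }
}
```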
## Interceptors {#interceptors}

To enable Interceptors, you can create beans that implement the `io.temporal.common.interceptors.WorkflowClientInterceptor`, `io.temporal.common.interceptors.ScheduleClientInterceptor`, or `io.temporal.common.interceptors.WorkerInterceptor` interface. Interceptors will be registered in the order specified by the `@Order` annotation.

## Integrations {#integrations}

The Temporal Spring Boot integration also has built-in support for various tools in the Spring ecosystem, such as metrics and tracing.

### Metrics

You can set up built-in Spring Boot metrics using [Spring Boot Actuator](https://docs.spring.io/spring-boot/reference/actuator/metrics.html). The Temporal Spring Boot integration will pick up the `MeterRegistry` bean and use it to report Temporal metrics. Alternatively, you can define a custom `io.micrometer.core.instrument.MeterRegistry` bean in the application context.

### Tracing

You can set up [Spring Cloud Sleuth](https://spring.io/projects/spring-cloud-sleuth) with an OpenTelemetry export. The Temporal Spring Boot integration will pick up the OpenTelemetry bean configured by `spring-cloud-sleuth-otel-autoconfigure` and use it for Temporal traces. Alternatively, you can define a custom `io.opentelemetry.api.OpenTelemetry` bean for OpenTelemetry or an `io.opentracing.Tracer` bean for OpenTracing in the application context.

## Customization of Options {#customize-options}

To programmatically customize the various options that are created by the Spring Boot integration, you can create beans that implement the `io.temporal.spring.boot.TemporalOptionsCustomizer<OptionsType>` interface. These are called after the options in your properties files are applied, and `OptionsType` may be one of:

- `WorkflowServiceStubsOptions.Builder`
- `WorkflowClientOptions.Builder`
- `WorkerFactoryOptions.Builder`
- `WorkerOptions.Builder`
- `WorkflowImplementationOptions.Builder`
- `TestEnvironmentOptions.Builder`

`io.temporal.spring.boot.WorkerOptionsCustomizer` may be used instead of `TemporalOptionsCustomizer` if `WorkerOptions` needs to be customized per Task Queue or Worker name. `io.temporal.spring.boot.WorkflowImplementationOptionsCustomizer` may be used instead of `TemporalOptionsCustomizer` if `WorkflowImplementationOptions` needs to be customized per Workflow Type. A concrete sketch appears at the end of this page.

## Testing {#testing}

The Temporal Spring Boot integration also has easy support for testing your Temporal code. Add the following to your `application.yml` to reconfigure the client to work through an `io.temporal.testing.TestWorkflowEnvironment` that uses the in-memory Java Test Server:

```yaml
spring.temporal:
  test-server:
    enabled: true
```

When `spring.temporal.test-server.enabled: true` is added, the `spring.temporal.connection` section is ignored. This allows wiring the `TestWorkflowEnvironment` bean in your unit tests:

```java
@SpringBootTest(classes = Test.Configuration.class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class Test {
  @Autowired ConfigurableApplicationContext applicationContext;
  @Autowired TestWorkflowEnvironment testWorkflowEnvironment;
  @Autowired WorkflowClient workflowClient;

  @BeforeEach
  void setUp() {
    applicationContext.start();
  }

  @Test
  @Timeout(value = 10)
  public void test() {
    // ...
  }

  @ComponentScan // to discover Activity beans annotated with @Component
  public static class Configuration {}
}
```

See the [Java SDK test frameworks documentation](/develop/java/testing-suite#test-frameworks) for more information about testing.
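As referenced in the Customization of Options section above, here is a minimal sketch of a `TemporalOptionsCustomizer` bean; the class name and the concurrency value are purely illustrative:

```java
import io.temporal.spring.boot.TemporalOptionsCustomizer;
import io.temporal.worker.WorkerOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TemporalCustomizationConfig {

  // Called after the options from the properties files are applied, letting
  // you adjust WorkerOptions programmatically.
  @Bean
  public TemporalOptionsCustomizer<WorkerOptions.Builder> workerOptionsCustomizer() {
    return builder -> builder.setMaxConcurrentActivityExecutionSize(50);
  }
}
```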
---

## Temporal Client - Java SDK

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the [Temporal Service](/temporal-service). Communication with a Temporal Service lets you perform actions such as starting Workflow Executions, sending Signals to Workflow Executions, sending Queries to Workflow Executions, getting the results of a Workflow Execution, and providing Activity Task Tokens.

This page shows you how to do the following using the Java SDK with the Temporal Client:

- [Connect to a local development Temporal Service](#connect-to-development-service)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Start a Workflow Execution](#start-workflow-execution)
- [Get Workflow results](#get-workflow-results)

:::caution

A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.

:::

## Connect to a development Temporal Service {#connect-to-development-service}

Use the `newLocalServiceStubs` method to create a stub that points to the Temporal development service, and then use the `WorkflowClient.newInstance` method to create a Temporal Client.

[sample-apps/java/client/devserver-client-sample/src/main/java/clientsample/YourCallerApp.java](https://github.com/temporalio/documentation/blob/main/sample-apps/java/client/devserver-client-sample/src/main/java/clientsample/YourCallerApp.java)

```java {3,7}
// ...
// Create an instance that connects to a Temporal Service running on the local
// machine, using the default port (7233)
WorkflowServiceStubs serviceStub = WorkflowServiceStubs.newLocalServiceStubs();

// Initialize the Temporal Client
// This application uses the Client to communicate with the local Temporal Service
WorkflowClient client = WorkflowClient.newInstance(serviceStub);
```

When you create a new Client with an instance of `newLocalServiceStubs`, the Client connects to the local Temporal Service on the default port, 7233. When you don't specify a custom Namespace, the Client connects to the `default` Namespace. To connect to a custom Namespace, use the `WorkflowClientOptions.Builder.setNamespace` method to set the Namespace, then pass the `clientOptions` to the `WorkflowClient.newInstance` method.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the configuration file path, the SDK looks for it at the path `~/.config/temporalio/temporal.toml`. For a list of all available configuration options, refer to [Environment Configuration](/references/client-environment-configuration).

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines two profiles: `default` and `prod`. Each profile has its own set of connection options.
```toml
# Default profile for local development
[profile.default]
address = "localhost:7233"
namespace = "default"

# Custom gRPC headers
[profile.default.grpc_meta]
my-custom-header = "development-value"
trace-id = "dev-trace-123"

# Production profile for Temporal Cloud
[profile.prod]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"

# TLS configuration for production
[profile.prod.tls]
# TLS is auto-enabled when this TLS config or API key is present, but you can configure it explicitly
# Use certificate files for mTLS
client_cert_path = "/etc/temporal/certs/client.pem"
client_key_path = "/etc/temporal/certs/client.key"

# Custom headers for production
[profile.prod.grpc_meta]
environment = "production"
service-version = "v1.2.3"
```

You can create a Temporal Client using a specific profile from the configuration file as follows. First use `ClientConfigProfile.load` to load the profile from the configuration file. Then use `profile.toWorkflowServiceStubsOptions` and `profile.toWorkflowClientOptions` to convert the profile to `WorkflowServiceStubsOptions` and `WorkflowClientOptions` respectively. Then use `WorkflowClient.newInstance` to create a Temporal Client.

```java {21-25,32-34}
public class LoadFromFile {
  private static final Logger logger = LoggerFactory.getLogger(LoadFromFile.class);

  public static void main(String[] args) {
    try {
      String configFilePath =
          Paths.get(LoadFromFile.class.getResource("/config.toml").toURI()).toString();

      ClientConfigProfile profile =
          ClientConfigProfile.load(
              LoadClientConfigProfileOptions.newBuilder()
                  .setConfigFilePath(configFilePath)
                  .build());

      WorkflowServiceStubsOptions serviceStubsOptions = profile.toWorkflowServiceStubsOptions();
      WorkflowClientOptions clientOptions = profile.toWorkflowClientOptions();

      try {
        // Create the workflow client using the loaded configuration
        WorkflowClient client =
            WorkflowClient.newInstance(
                WorkflowServiceStubs.newServiceStubs(serviceStubsOptions), clientOptions);

        // Test the connection by getting system info
        var systemInfo =
            client
                .getWorkflowServiceStubs()
                .blockingStub()
                .getSystemInfo(
                    io.temporal.api.workflowservice.v1.GetSystemInfoRequest.getDefaultInstance());

        logger.info("✅ Client connected successfully!");
        logger.info("   Server version: {}", systemInfo.getServerVersion());
      } catch (Exception e) {
        logger.error("❌ Failed to connect: {}", e.getMessage());
      }
    } catch (Exception e) {
      logger.error("Failed to load configuration: {}", e.getMessage(), e);
      System.exit(1);
    }
  }
}
```

Use the `envconfig` package to set connection options for the Temporal Client using environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables will take precedence.
```java {18-19,26-28}
public class LoadFromFile {
  private static final Logger logger = LoggerFactory.getLogger(LoadFromFile.class);

  public static void main(String[] args) {
    try {
      ClientConfigProfile profile =
          ClientConfigProfile.load(LoadClientConfigProfileOptions.newBuilder().build());

      WorkflowServiceStubsOptions serviceStubsOptions = profile.toWorkflowServiceStubsOptions();
      WorkflowClientOptions clientOptions = profile.toWorkflowClientOptions();

      try {
        // Create the workflow client using the loaded configuration
        WorkflowClient client =
            WorkflowClient.newInstance(
                WorkflowServiceStubs.newServiceStubs(serviceStubsOptions), clientOptions);

        // Test the connection by getting system info
        var systemInfo =
            client
                .getWorkflowServiceStubs()
                .blockingStub()
                .getSystemInfo(
                    io.temporal.api.workflowservice.v1.GetSystemInfoRequest.getDefaultInstance());

        logger.info("✅ Client connected successfully!");
        logger.info("   Server version: {}", systemInfo.getServerVersion());
      } catch (Exception e) {
        logger.error("❌ Failed to connect: {}", e.getMessage());
      }
    } catch (Exception e) {
      logger.error("Failed to load configuration: {}", e.getMessage(), e);
      System.exit(1);
    }
  }
}
```

If you don't want to use environment variables or a configuration file, you can specify connection options directly in code. This is convenient for local development and testing. You can also load a base configuration from environment variables or a configuration file, and then override specific options in code.

[sample-apps/java/client/devserver-namespace-client-sample/src/main/java/clientsample/YourCallerApp.java](https://github.com/temporalio/documentation/blob/main/sample-apps/java/client/devserver-namespace-client-sample/src/main/java/clientsample/YourCallerApp.java)

```java
// ...
// Add the Namespace as a Client Option
WorkflowClientOptions clientOptions =
    WorkflowClientOptions.newBuilder()
        .setNamespace(namespace)
        .build();

// Initialize the Temporal Client
// This application uses the Client to communicate with the Temporal Service
WorkflowClient client = WorkflowClient.newInstance(service, clientOptions);
```

## Connect to Temporal Cloud {#connect-to-temporal-cloud}

You can connect to Temporal Cloud using either an [API key](/cloud/api-keys) or through [mTLS](/cloud/certificates). Connection to Temporal Cloud or any secured Temporal Service requires additional connection options compared to connecting to an unsecured local development instance:

- Your credentials for authentication.
  - If you are using an API key, provide the API key value.
  - If you are using mTLS, provide the mTLS CA certificate and mTLS private key.
- Your _Namespace and Account ID_ combination, which follows the format `<namespace_id>.<account_id>`.
- The _endpoint_ may vary. The most common endpoint used is the gRPC regional endpoint, which follows the format: `<region>.<cloud_provider>.api.temporal.io:7233`.
  - For Namespaces with High Availability features with API key authentication enabled, use the gRPC Namespace endpoint: `<namespace_id>.<account_id>.tmprl.cloud:7233`. This allows automated failover without needing to switch endpoints.

You can find the Namespace and Account ID, as well as the endpoint, on the Namespaces tab:

![The Namespace and Account ID combination on the left, and the regional endpoint on the right](/img/cloud/apikeys/namespaces-and-regional-endpoints.png)

You can provide these connection options using environment variables, a configuration file, or directly in code.
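For instance, a minimal sketch of the direct, in-code approach using an API key; the endpoint, Namespace, and key values are placeholders, and `addApiKey` supplies the key on each request:

```java
import io.temporal.client.WorkflowClient;
import io.temporal.client.WorkflowClientOptions;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

// Connect to Temporal Cloud with an API key (all values are placeholders).
WorkflowServiceStubsOptions stubsOptions =
    WorkflowServiceStubsOptions.newBuilder()
        .setTarget("<region>.<cloud_provider>.api.temporal.io:7233")
        .setEnableHttps(true)
        .addApiKey(() -> "<your-api-key>")
        .build();

WorkflowClient client =
    WorkflowClient.newInstance(
        WorkflowServiceStubs.newServiceStubs(stubsOptions),
        WorkflowClientOptions.newBuilder()
            .setNamespace("<namespace_id>.<account_id>")
            .build());
```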
The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. For a list of all available configuration options you can set in the TOML file, refer to [Environment Configuration](/references/client-environment-configuration).

You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the path to the configuration file, the SDK looks for it at the default path `~/.config/temporalio/temporal.toml`.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines a `cloud` profile with the necessary connection options to connect to Temporal Cloud via an API key:

```toml
# Cloud profile for Temporal Cloud
[profile.cloud]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"
```

If you want to use mTLS authentication instead of an API key, replace the `api_key` field with your mTLS certificate and private key:

```toml
# Cloud profile for Temporal Cloud
[profile.cloud]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
tls_client_cert_data = "your-tls-client-cert-data"
tls_client_key_path = "your-tls-client-key-path"
```

With the connection options defined in the configuration file, load the `cloud` profile with `ClientConfigProfile.load` and create a Temporal Client as follows. After loading the profile, you can also programmatically override specific connection options before creating the client.
```java {25-30,33-35,42-44}
public class LoadProfile {
  private static final Logger logger = LoggerFactory.getLogger(LoadProfile.class);

  public static void main(String[] args) {
    String profileName = "cloud";
    try {
      String configFilePath =
          Paths.get(LoadProfile.class.getResource("/config.toml").toURI()).toString();
      logger.info("--- Loading '{}' profile from {} ---", profileName, configFilePath);

      // Load a specific profile from the file, with environment variable overrides
      ClientConfigProfile profile =
          ClientConfigProfile.load(
              LoadClientConfigProfileOptions.newBuilder()
                  .setConfigFilePath(configFilePath)
                  .setConfigFileProfile(profileName)
                  .build());

      // Demonstrate a programmatic override of a loaded connection option
      ClientConfigProfile.Builder profileBuilder = profile.toBuilder();
      profileBuilder.setAddress("localhost:7233"); // Override the address from the profile
      profile = profileBuilder.build();

      WorkflowServiceStubsOptions serviceStubsOptions = profile.toWorkflowServiceStubsOptions();
      WorkflowClientOptions clientOptions = profile.toWorkflowClientOptions();

      try {
        // Create the workflow client using the loaded configuration
        WorkflowClient client =
            WorkflowClient.newInstance(
                WorkflowServiceStubs.newServiceStubs(serviceStubsOptions), clientOptions);

        // Test the connection by getting system info
        var systemInfo =
            client
                .getWorkflowServiceStubs()
                .blockingStub()
                .getSystemInfo(
                    io.temporal.api.workflowservice.v1.GetSystemInfoRequest.getDefaultInstance());

        logger.info("✅ Client connected successfully!");
        logger.info("   Server version: {}", systemInfo.getServerVersion());
      } catch (Exception e) {
        logger.error("❌ Failed to connect: {}", e.getMessage());
      }
    } catch (Exception e) {
      logger.error("Failed to load configuration: {}", e.getMessage(), e);
      System.exit(1);
    }
  }
}
```

The following environment variables are required to connect to Temporal Cloud:

- `TEMPORAL_NAMESPACE`: Your Namespace and Account ID combination in the format `<namespace_id>.<account_id>`.
- `TEMPORAL_ADDRESS`: The gRPC endpoint for your Temporal Cloud Namespace.
- `TEMPORAL_API_KEY`: Your API key value. Required if you are using API key authentication.
- `TEMPORAL_TLS_CLIENT_CERT_DATA` or `TEMPORAL_TLS_CLIENT_CERT_PATH`: Your mTLS client certificate data or file path. Required if you are using mTLS authentication.
- `TEMPORAL_TLS_CLIENT_KEY_DATA` or `TEMPORAL_TLS_CLIENT_KEY_PATH`: Your mTLS client private key data or file path. Required if you are using mTLS authentication.

Ensure these environment variables exist in your environment before running your Java application.

To set connection options for the Temporal Client using environment variables, load the default client configuration profile; it automatically reads all environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables take precedence.
```java {17-18,25-27}
public class LoadFromFile {
  private static final Logger logger = LoggerFactory.getLogger(LoadFromFile.class);

  public static void main(String[] args) {
    try {
      ClientConfigProfile profile =
          ClientConfigProfile.load(LoadClientConfigProfileOptions.newBuilder().build());

      WorkflowServiceStubsOptions serviceStubsOptions = profile.toWorkflowServiceStubsOptions();
      WorkflowClientOptions clientOptions = profile.toWorkflowClientOptions();

      try {
        // Create the workflow client using the loaded configuration
        WorkflowClient client =
            WorkflowClient.newInstance(
                WorkflowServiceStubs.newServiceStubs(serviceStubsOptions), clientOptions);

        // Test the connection by getting system info
        var systemInfo =
            client
                .getWorkflowServiceStubs()
                .blockingStub()
                .getSystemInfo(
                    io.temporal.api.workflowservice.v1.GetSystemInfoRequest.getDefaultInstance());

        logger.info("✅ Client connected successfully!");
        logger.info("   Server version: {}", systemInfo.getServerVersion());
      } catch (Exception e) {
        logger.error("❌ Failed to connect: {}", e.getMessage());
      }
    } catch (Exception e) {
      logger.error("Failed to load configuration: {}", e.getMessage(), e);
      System.exit(1);
    }
  }
}
```

Now, when instantiating a Temporal `client` in your Temporal Java SDK code, provide the API key with `WorkflowServiceStubsOptions` and the Namespace and Account ID in `WorkflowClient.newInstance`. The angle-bracket values below are placeholders for your own values:

```java
WorkflowServiceStubs service =
    WorkflowServiceStubs.newServiceStubs(
        WorkflowServiceStubsOptions.newBuilder()
            .addApiKey(() -> "<your-api-key>")
            .setTarget("<endpoint>")
            .setEnableHttps(true)
            ...
            .build());

WorkflowClient client =
    WorkflowClient.newInstance(
        service,
        WorkflowClientOptions.newBuilder()
            .setNamespace("<namespace_id>.<account_id>")
            .build());
```

To update the API key, update the `stubOptions`:

```java
String myKey = "<your-api-key>";
WorkflowServiceStubsOptions stubOptions =
    WorkflowServiceStubsOptions.newBuilder()
        .addApiKey(() -> myKey)
        .build();

// Update by replacing; this must be done in a thread-safe way
myKey = "Bearer " + "<your-new-api-key>";
```

To connect to Temporal Cloud using mTLS, you need to provide your mTLS client certificate and private key. Use the `SimpleSslContextBuilder` to build an SSL context, and then set it with the `WorkflowServiceStubsOptions.Builder.setSslContext` method.

When you use a remote service, you don't use the `newLocalServiceStubs` convenience method. Instead, set your connection details as stub configuration options:

[sample-apps/java/client/cloudserver-client-sample/src/main/java/clientsample/YourCallerApp.java](https://github.com/temporalio/documentation/blob/main/sample-apps/java/client/cloudserver-client-sample/src/main/java/clientsample/YourCallerApp.java)

```java
// ...

// Set the Service Stub options (SSL context and gRPC endpoint)
WorkflowServiceStubsOptions stubsOptions = WorkflowServiceStubsOptions
    .newBuilder()
    .setSslContext(sslContext)
    .setTarget(gRPCEndpoint)
    .build();

// Create a stub that accesses a Temporal Service
WorkflowServiceStubs serviceStub = WorkflowServiceStubs.newServiceStubs(stubsOptions);
```

Each Temporal Cloud service Client has four prerequisites:

- The full Namespace Id from the [Cloud Namespace](https://cloud.temporal.io/namespaces) details page
- The gRPC endpoint from the [Cloud Namespace](https://cloud.temporal.io/namespaces) details page
- Your mTLS private key
- Your mTLS x509 Certificate

Retrieve these values before building your Client. The following sample generates an SSL context from the mTLS .pem and .key files. Along with the gRPC endpoint, this information configures a service stub for Temporal Cloud.
Add the Namespace to your Client build options and initialize the new Client:

[sample-apps/java/client/cloudserver-client-sample/src/main/java/clientsample/YourCallerApp.java](https://github.com/temporalio/documentation/blob/main/sample-apps/java/client/cloudserver-client-sample/src/main/java/clientsample/YourCallerApp.java)

```java
// ...

// Generate an SSL context
InputStream clientCertInputStream = new FileInputStream(clientCertPath);
InputStream clientKeyInputStream = new FileInputStream(clientKeyPath);
SslContext sslContext =
    SimpleSslContextBuilder.forPKCS8(clientCertInputStream, clientKeyInputStream).build();

// Set the Service Stub options (SSL context and gRPC endpoint)
WorkflowServiceStubsOptions stubsOptions = WorkflowServiceStubsOptions
    .newBuilder()
    .setSslContext(sslContext)
    .setTarget(gRPCEndpoint)
    .build();

// Create a stub that accesses a Temporal Service
WorkflowServiceStubs serviceStub = WorkflowServiceStubs.newServiceStubs(stubsOptions);

// Set the Client options
WorkflowClientOptions clientOptions = WorkflowClientOptions
    .newBuilder()
    .setNamespace(namespace)
    .build();

// Initialize the Temporal Client
// This application uses the Client to communicate with the Temporal Service
WorkflowClient client = WorkflowClient.newInstance(serviceStub, clientOptions);
```

## Start a Workflow Execution {#start-workflow-execution}

**How to start a Workflow Execution using the Java SDK**

[Workflow Execution](/workflow-execution) semantics rely on several parameters. To start a Workflow Execution, you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

In the examples below, all Workflow Executions are started using a Temporal Client. To spawn Workflow Executions from within another Workflow Execution, use either the [Child Workflow](/develop/java/child-workflows) or External Workflow APIs.

See the [Customize Workflow Type](/develop/java/core-application#workflow-type) section to see how to customize the name of the Workflow Type.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event ([WorkflowExecutionStarted](/references/events#workflowexecutionstarted)) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first [WorkflowTaskScheduled](/references/events#workflowtaskscheduled) Event.

Use `WorkflowStub` to start a Workflow Execution from within a Client, and `ExternalWorkflowStub` to start a different Workflow Execution from within a Workflow. See [`SignalWithStart`](/develop/java/message-passing#signal-with-start) to start a Workflow Execution that receives a Signal from within another Workflow.

**Using `WorkflowStub`**

`WorkflowStub` is a proxy generated by the `WorkflowClient`. Each time a new Workflow Execution is started, an instance of the Workflow implementation object is created. Then, one of the methods (depending on the Workflow Type of the instance) annotated with `@WorkflowMethod` can be invoked. As soon as this method returns, the Workflow Execution is considered to be complete.

You can use a typed or untyped `WorkflowStub` in the client code.

- A typed `WorkflowStub` is useful because it is type safe and allows you to invoke your Workflow methods annotated with `@WorkflowMethod`, `@QueryMethod`, and `@SignalMethod` directly.
- An untyped `WorkflowStub` does not use the Workflow interface, and is not type safe.
An untyped stub is more flexible because it has methods from the `WorkflowStub` interface, such as `start`, `signalWithStart`, `getResult` (sync and async), `query`, `signal`, `cancel`, and `terminate`. Note that the Temporal Java SDK also provides typed `WorkflowStub` versions of these methods. When using an untyped `WorkflowStub`, you rely on the Workflow Type, Activity Type, Child Workflow Type, as well as Query and Signal names. For details, see [Temporal Client](#connect-to-development-service).

A Workflow Execution can be started either synchronously or asynchronously.

- Synchronous invocation starts a Workflow and then waits for its completion. If the process that started the Workflow crashes or stops waiting, the Workflow continues executing. Because Workflows are potentially long-running and Client crashes happen, synchronous invocation is rarely used in production. The following example is a type-safe approach for starting a Workflow Execution synchronously.

```java
NotifyUserAccounts workflow = client.newWorkflowStub(
    NotifyUserAccounts.class,
    WorkflowOptions.newBuilder()
        .setWorkflowId("notifyAccounts")
        .setTaskQueue(taskQueue)
        .build());

// start the Workflow and wait for a result.
// notify(String[] accountIds) is a Workflow method defined in the Workflow Definition.
workflow.notify(new String[] { "Account1", "Account2", "Account3", "Account4", "Account5",
    "Account6", "Account7", "Account8", "Account9", "Account10" });
```

- Asynchronous start initiates a Workflow Execution and immediately returns to the caller. This is the most common way to start Workflows in production code. The [`WorkflowClient`](https://github.com/temporalio/sdk-java/blob/master/temporal-sdk/src/main/java/io/temporal/client/WorkflowClient.java) provides static methods, such as `start`, `execute`, and `signalWithStart`, that help with starting your Workflows asynchronously. The following examples show how to start Workflow Executions asynchronously, with either a typed or untyped `WorkflowStub`.

- **Typed WorkflowStub Example**

```java
// create typed Workflow stub
FileProcessingWorkflow workflow = client.newWorkflowStub(FileProcessingWorkflow.class,
    WorkflowOptions.newBuilder()
        .setTaskQueue(taskQueue)
        .setWorkflowId(workflowId)
        .build());

// use WorkflowClient.execute to return a future that contains the Workflow result or failure,
// or use WorkflowClient.start to return the WorkflowId and RunId of the started Workflow.
WorkflowClient.start(workflow::greetCustomer);
```

- **Untyped WorkflowStub Example**

```java
WorkflowStub untyped = client.newUntypedWorkflowStub("FileProcessingWorkflow",
    WorkflowOptions.newBuilder()
        .setWorkflowId(workflowId)
        .setTaskQueue(taskQueue)
        .build());

// blocks until Workflow Execution has been started (not until it completes)
untyped.start(argument);
```

You can call a Dynamic Workflow implementation using an untyped `WorkflowStub`. The following example shows how to call the Dynamic Workflow implementation in the Client code.

```java
WorkflowClient client = WorkflowClient.newInstance(service);

/**
 * Note that for this part of the client code, the dynamic Workflow implementation must
 * be known to the Worker at runtime in order to dispatch Workflow tasks, and may be defined
 * in the Worker definition as:
 */
// worker.registerWorkflowImplementationTypes(DynamicGreetingWorkflowImpl.class);

/* Create the Workflow stub to call the dynamic Workflow.
 * Note that the Workflow Type is not explicitly registered with the Worker. */
WorkflowOptions workflowOptions =
    WorkflowOptions.newBuilder().setTaskQueue(TASK_QUEUE).setWorkflowId(WORKFLOW_ID).build();
WorkflowStub workflow = client.newUntypedWorkflowStub("DynamicWF", workflowOptions);
```

`DynamicWorkflow` can be used to invoke different Workflow Types. To check what type is running when your Dynamic Workflow `execute` method runs, use `getWorkflowType()` in the implementation code.

```java
String type = Workflow.getInfo().getWorkflowType();
```

See [Workflow Execution Result](#get-workflow-results) for details on how to get the results of the Workflow Execution.

**Using `ExternalWorkflowStub`**

Use `ExternalWorkflowStub` within a Workflow to invoke, and send Signals to, other Workflows by type. This is particularly useful for interacting with Workflows written in other language SDKs, as shown in the following example.

```java
@Override
public String yourWFMethod(String name) {
    ExternalWorkflowStub callOtherWorkflow = Workflow.newUntypedExternalWorkflowStub("OtherWFId");
    // Send a Signal to the other Workflow (the Signal name here is illustrative)
    callOtherWorkflow.signal("yourSignal", name);
    return name;
}
```

See the [Temporal Polyglot](https://github.com/tsurdilo/temporal-polyglot) code for examples of executing Workflows written in other language SDKs.

**Recurring start**

You can start a Workflow Execution on a regular schedule by using the [`setCronSchedule`](/develop/java/schedules#cron-schedule) Workflow option in the Client code.

### How to set a Workflow's Task Queue {#set-task-queue}

In most SDKs, the only Workflow Option that must be set is the name of the [Task Queue](/task-queue).

For your code to execute, a Worker Process must be running. This process needs a Worker Entity that is polling the same Task Queue name.

Set the Workflow Task Queue with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setTaskQueue`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

- Type: `String`
- Default: none

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWF")
            // Set the Task Queue
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            .build());
```

### How to set a Workflow Id {#workflow-id}

Although it is not required, we recommend providing your own [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) that maps to a business process or business entity identifier, such as an order identifier or customer identifier.

Set the Workflow Id with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setWorkflowId`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).
- Type: `String`
- Default: none

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            // Set the Workflow Id
            .setWorkflowId("YourWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            .build());
```

### Java WorkflowOptions reference {#workflow-options-reference}

Create a Workflow stub with [`newWorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) in the Temporal Client code, call the instance of the Workflow, and set the Workflow Options with the [`WorkflowOptions.Builder`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html) class.

The following fields are available:

| Option                                                  | Required             | Type                                                                                                                 |
| ------------------------------------------------------- | -------------------- | -------------------------------------------------------------------------------------------------------------------- |
| [`WorkflowId`](#workflowid)                             | No (but recommended) | String                                                                                                               |
| [`TaskQueue`](#taskqueue)                               | **Yes**              | String                                                                                                               |
| [`WorkflowExecutionTimeout`](#workflowexecutiontimeout) | No                   | `Duration`                                                                                                           |
| [`WorkflowRunTimeout`](#workflowruntimeout)             | No                   | `Duration`                                                                                                           |
| [`WorkflowTaskTimeout`](#workflowtasktimeout)           | No                   | `Duration`                                                                                                           |
| [`WorkflowIdReusePolicy`](#workflowidreusepolicy)       | No                   | `WorkflowIdReusePolicy`                                                                                              |
| [`RetryOptions`](#retryoptions)                         | No                   | [`RetryOptions`](https://www.javadoc.io/static/io.temporal/temporal-sdk/1.17.0/io/temporal/common/RetryOptions.html) |
| [`CronSchedule`](#cronschedule)                         | No                   | String                                                                                                               |
| [`Memo`](#memo)                                         | No                   | `Map<String, Object>`                                                                                                |
| [`SearchAttributes`](#searchattributes)                 | No                   | `Map<String, ?>`                                                                                                     |

#### WorkflowId

Set the Workflow Id with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setWorkflowId`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

- Type: `String`
- Default: none

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            // Set the Workflow Id
            .setWorkflowId("YourWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            .build());
```

#### TaskQueue

Set the Workflow Task Queue with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setTaskQueue`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

- Type: `String`
- Default: none

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWF")
            // Set the Task Queue
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            .build());
```

#### WorkflowExecutionTimeout

Set the [Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout) with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setWorkflowExecutionTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).
- Type: `Duration`
- Default: Unlimited

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Workflow Execution Timeout duration
            .setWorkflowExecutionTimeout(Duration.ofSeconds(10))
            .build());
```

#### WorkflowRunTimeout

Set the Workflow Run Timeout with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setWorkflowRunTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

- Type: `Duration`
- Default: Same as [WorkflowExecutionTimeout](#workflowexecutiontimeout).

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Workflow Run Timeout duration
            .setWorkflowRunTimeout(Duration.ofSeconds(10))
            .build());
```

#### WorkflowTaskTimeout

Set the Workflow Task Timeout with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setWorkflowTaskTimeout`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

- Type: `Duration`
- Default: 10 seconds.
- Values: Maximum accepted value is 60 seconds.

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Workflow Task Timeout duration
            .setWorkflowTaskTimeout(Duration.ofSeconds(10))
            .build());
```

#### WorkflowIdReusePolicy

- Type: `WorkflowIdReusePolicy`
- Default: `AllowDuplicate`
- Values:
  - `AllowDuplicateFailedOnly`: The Workflow can start if the earlier Workflow Execution failed, was Canceled, or was Terminated.
  - `AllowDuplicate`: The Workflow can start regardless of the earlier Execution's closure status.
  - `RejectDuplicate`: The Workflow cannot start if there is an earlier Run.

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("GreetWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Workflow Id Reuse Policy
            .setWorkflowIdReusePolicy(
                WorkflowIdReusePolicy.WORKFLOW_ID_REUSE_POLICY_REJECT_DUPLICATE)
            .build());
```

#### RetryOptions

To set Workflow Retry Options in the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance, use [`WorkflowOptions.Builder.setWorkflowRetryOptions`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

- Type: `RetryOptions`
- Default: `null`, which means no retries are attempted.
```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("GreetWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Workflow Retry Options
            .setRetryOptions(RetryOptions.newBuilder().build())
            .build());
```

#### CronSchedule

A [Temporal Cron Job](/cron-job) is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution.

A Cron Schedule is provided as an option when the call to spawn a Workflow Execution is made.

Set the Cron Schedule with the [`WorkflowStub`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowStub.html) instance in the Client code using [`WorkflowOptions.Builder.setCronSchedule`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/client/WorkflowOptions.Builder.html).

Setting `setCronSchedule` changes the Workflow Execution into a Temporal Cron Job. The default timezone for a Cron is UTC.

- Type: `String`
- Default: None

```java
// create Workflow stub for YourWorkflowInterface
YourWorkflowInterface workflow1 =
    YourWorker.yourclient.newWorkflowStub(
        YourWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("YourWF")
            .setTaskQueue(YourWorker.TASK_QUEUE)
            // Set Cron Schedule
            .setCronSchedule("* * * * *")
            .build());
```

For more details, see the [HelloCron Sample](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/hello/HelloCron.java).

#### Memo

- Type: `Map<String, Object>`
- Default: None

```java
// create Workflow stub for GreetWorkflowInterface
GreetWorkflowInterface workflow1 =
    WorkerGreet.greetclient.newWorkflowStub(
        GreetWorkflowInterface.class,
        WorkflowOptions.newBuilder()
            .setWorkflowId("GreetWF")
            .setTaskQueue(WorkerGreet.TASK_QUEUE)
            // Set Memo. You can set additional non-indexed info via Memo
            .setMemo(ImmutableMap.of("memoKey", "memoValue"))
            .build());
```

#### SearchAttributes

Search Attributes are additional indexed information attached to a Workflow Execution and used for search and visibility. You can use them in queries with the List/Scan/Count Workflow APIs. The keys and their value types must be registered on the Temporal Server side.

- Type: `Map<String, ?>`
- Default: None

```java
private static void parentWorkflow() {
    ChildWorkflowOptions childworkflowOptions =
        ChildWorkflowOptions.newBuilder()
            // Set Search Attributes
            .setSearchAttributes(ImmutableMap.of("MySearchAttributeName", "value"))
            .build();
```

The following Java types are supported:

- String
- Long, Integer, Short, Byte
- Boolean
- Double
- OffsetDateTime
- Collection of the types in this list.

### How to get the result of a Workflow Execution in Java {#get-workflow-results}

If the call to start a Workflow Execution is successful, you will gain access to the Workflow Execution's Run Id. The Workflow Id, Run Id, and Namespace can be used to uniquely identify a Workflow Execution in the system and get its result.

You can either block until the result is available (synchronous execution) or retrieve the result at some other point in time (asynchronous execution). You can also use Queries to access the state and results of Workflow Executions.

A synchronous Workflow Execution blocks your client thread until the Workflow Execution completes (or fails) and returns the result (or the error in case of failure).
The following example is a type-safe approach for getting the results of a synchronous Workflow Execution.

```java
FileProcessingWorkflow workflow = client.newWorkflowStub(
    FileProcessingWorkflow.class,
    WorkflowOptions.newBuilder()
        .setWorkflowId(workflowId)
        .setTaskQueue(taskQueue)
        .build());

// start sync and wait for results (or failure)
String result = workflow.processFile(new Argument());
```

An asynchronous start immediately returns control to the caller. The following examples show how to get the results of a Workflow Execution through a typed and untyped `WorkflowStub`.

- **Typed WorkflowStub Example**

```java
// create typed Workflow stub
FileProcessingWorkflow workflow = client.newWorkflowStub(FileProcessingWorkflow.class,
    WorkflowOptions.newBuilder()
        .setTaskQueue(taskQueue)
        .setWorkflowId(workflowId)
        .build());

// use WorkflowClient.execute to return a future that contains the Workflow result or failure,
// or use WorkflowClient.start to return the WorkflowId and RunId of the started Workflow.
WorkflowClient.start(workflow::greetCustomer);
```

- **Untyped WorkflowStub Example**

```java
WorkflowStub untyped = client.newUntypedWorkflowStub("FileProcessingWorkflow",
    WorkflowOptions.newBuilder()
        .setWorkflowId(workflowId)
        .setTaskQueue(taskQueue)
        .build());

// blocks until Workflow Execution has been started (not until it completes)
untyped.start(argument);
```

If you need to wait for a Workflow Execution to complete after an asynchronous start, the most straightforward way is to call the blocking Workflow instance again. Note that if `WorkflowOptions.WorkflowIdReusePolicy` is not set to `AllowDuplicate`, then instead of throwing `DuplicateWorkflowException`, this reconnects to the existing Workflow and waits for its completion. The following example shows how to do this from a different process than the one that started the Workflow Execution.

```java
YourWorkflow workflow = client.newWorkflowStub(YourWorkflow.class, workflowId);

// Returns the result after waiting for the Workflow to complete.
String result = workflow.yourMethod();
```

Another way to connect to an existing Workflow and wait for its completion from another process is to use an untyped `WorkflowStub`. For example:

```java
WorkflowStub untyped = client.newUntypedWorkflowStub(workflowType, workflowOptions);

// Returns the result after waiting for the Workflow to complete.
String result = untyped.getResult(String.class);
```
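If you don't want to block a thread while waiting, the untyped stub also exposes an asynchronous variant of `getResult`. A minimal sketch, assuming the `untyped` stub from the example above:

```java
// Wait for the result without blocking the current thread.
// getResultAsync returns a CompletableFuture that completes when the Workflow does.
CompletableFuture<String> futureResult = untyped.getResultAsync(String.class);

futureResult.thenAccept(result -> System.out.println("Workflow result: " + result));
```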
**Get last (successful) completion result**

For a Temporal Cron Job, get the result of previous successful runs using `Workflow.getLastCompletionResult()`. The method returns `null` if there is no previous completion. The following example shows how to implement this in a Workflow.

```java
public String cronWorkflow() {
    String lastProcessedFileName = Workflow.getLastCompletionResult(String.class);

    // Process work starting from the lastProcessedFileName.
    // Business logic implementation goes here.
    // Updates lastProcessedFileName to the new value.

    return lastProcessedFileName;
}
```

Note that this works even if one of the Cron schedule runs failed. The next scheduled run will still get the last successful result, as long as the Workflow completed successfully at least once. For example, for a daily Cron Workflow, if the run succeeds on the first day and fails on the second day, the third day's run will get the first day's result using these APIs.

---

## Temporal Nexus - Java SDK Feature Guide

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Java SDK support for Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability).

:::

Use [Temporal Nexus](/evaluate/nexus) to connect Temporal Applications within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations.

This page shows how to do the following:

- [Run a development Temporal Service with Nexus enabled](#run-the-temporal-nexus-development-server)
- [Create caller and handler Namespaces](#create-caller-handler-namespaces)
- [Create a Nexus Endpoint to route requests from caller to handler](#create-nexus-endpoint)
- [Define the Nexus Service contract](#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](#develop-caller-workflow-nexus-service)
- [Make Nexus calls across Namespaces with a development Server](#nexus-calls-across-namespaces-dev-server)
- [Make Nexus calls across Namespaces in Temporal Cloud](#nexus-calls-across-namespaces-temporal-cloud)

:::note

This documentation uses source code derived from the [Java Nexus sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/nexus).

:::

## Run the Temporal Development Server with Nexus enabled {#run-the-temporal-nexus-development-server}

Prerequisites:

- [Install the latest Temporal CLI](https://learn.temporal.io/getting_started/java/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli) (v1.3.0 or higher recommended)
- [Install the latest Temporal Java SDK](https://learn.temporal.io/getting_started/java/dev_environment/#add-temporal-java-sdk-dependencies) (v1.28.0 or higher recommended)

The first step in working with Temporal Nexus is starting a Temporal server with Nexus enabled:

```
temporal server start-dev
```

This command automatically starts the Temporal development server with the Web UI, and creates the `default` Namespace. It uses an in-memory database, so do not use it for real use cases.

The Temporal Web UI should now be accessible at [http://localhost:8233](http://localhost:8233), and the Temporal Server should now be available for client connections on `localhost:7233`.

## Create caller and handler Namespaces {#create-caller-handler-namespaces}

Before setting up Nexus endpoints, create separate Namespaces for the caller and handler.

```
temporal operator namespace create --namespace my-target-namespace
temporal operator namespace create --namespace my-caller-namespace
```

`my-target-namespace` will contain the Nexus Operation handler, and we will use a Workflow in `my-caller-namespace` to call that Operation handler. We use different Namespaces to demonstrate cross-Namespace Nexus calls.

## Create a Nexus Endpoint to route requests from caller to handler {#create-nexus-endpoint}

After establishing caller and handler Namespaces, the next step is to create a Nexus Endpoint to route requests.

```
temporal operator nexus endpoint create \
  --name my-nexus-endpoint-name \
  --target-namespace my-target-namespace \
  --target-task-queue my-handler-task-queue
```

You can also use the Web UI to create the Namespaces and Nexus Endpoint.

## Define the Nexus Service contract {#define-nexus-service-contract}

Defining a clear contract for the Nexus Service is crucial for smooth communication.
In this example, there is a service package that describes the Service and Operation names along with input/output types for caller Workflows to use the Nexus Endpoint.

Each [Temporal SDK includes and uses a default Data Converter](https://docs.temporal.io/dataconversion). The default Data Converter encodes payloads in the following order: Null, Byte array, Protobuf JSON, and JSON. In a polyglot environment (that is, where more than one language and SDK are used to develop a Temporal solution), Protobuf and JSON are common choices. This example uses Java classes serialized into JSON.

[core/src/main/java/io/temporal/samples/nexus/service/NexusService.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/service/NexusService.java)

```java
@Service
public interface NexusService {
  enum Language {
    EN,
    FR,
    DE,
    ES,
    TR
  }

  class HelloInput {
    private final String name;
    private final Language language;

    @JsonCreator(mode = JsonCreator.Mode.PROPERTIES)
    public HelloInput(
        @JsonProperty("name") String name, @JsonProperty("language") Language language) {
      this.name = name;
      this.language = language;
    }

    @JsonProperty("name")
    public String getName() {
      return name;
    }

    @JsonProperty("language")
    public Language getLanguage() {
      return language;
    }
  }

  class HelloOutput {
    private final String message;

    @JsonCreator(mode = JsonCreator.Mode.PROPERTIES)
    public HelloOutput(@JsonProperty("message") String message) {
      this.message = message;
    }

    @JsonProperty("message")
    public String getMessage() {
      return message;
    }
  }

  class EchoInput {
    private final String message;

    @JsonCreator(mode = JsonCreator.Mode.PROPERTIES)
    public EchoInput(@JsonProperty("message") String message) {
      this.message = message;
    }

    @JsonProperty("message")
    public String getMessage() {
      return message;
    }
  }

  class EchoOutput {
    private final String message;

    @JsonCreator(mode = JsonCreator.Mode.PROPERTIES)
    public EchoOutput(@JsonProperty("message") String message) {
      this.message = message;
    }

    @JsonProperty("message")
    public String getMessage() {
      return message;
    }
  }

  @Operation
  HelloOutput hello(HelloInput input);

  @Operation
  EchoOutput echo(EchoInput input);
}
```

## Develop a Nexus Service and Operation handlers {#develop-nexus-service-operation-handlers}

Nexus Operation handlers are typically defined in the same Worker as the underlying Temporal primitives they abstract.

Operation handlers can decide if a given Nexus Operation will be synchronous or asynchronous. They can execute arbitrary code, and invoke underlying Temporal primitives such as a Workflow, Query, Signal, or Update.

The `io.temporal.nexus.*` packages have utilities to help create Nexus Operations:

- `Nexus.getOperationContext().getWorkflowClient()` - Get the Temporal Client that the Worker was initialized with, for synchronous handlers backed by Temporal primitives such as Signals and Queries
- `WorkflowRunOperation.fromWorkflowMethod` - Run a Workflow as an asynchronous Nexus Operation

This example starts with a sync Operation handler example using the `OperationHandler.sync` method, and then shows how to create an async Operation handler that uses `WorkflowRunOperation.fromWorkflowMethod` to start a handler Workflow from a Nexus Operation.

### Develop a Synchronous Nexus Operation handler

The `OperationHandler.sync` method is for exposing simple RPC handlers. Its handler function can access an SDK client that can be used for signaling, querying, and listing Workflows.
However, implementations are free to make arbitrary calls to other services or databases, or perform computations such as this one:

[core/src/main/java/io/temporal/samples/nexus/handler/NexusServiceImpl.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/handler/NexusServiceImpl.java)

```java
// To create a service implementation, annotate the class with @ServiceImpl and provide the
// interface that the service implements. The service implementation class should have methods that
// return OperationHandler that correspond to the operations defined in the service interface.
@ServiceImpl(service = NexusService.class)
public class NexusServiceImpl {
  @OperationImpl
  public OperationHandler<NexusService.EchoInput, NexusService.EchoOutput> echo() {
    // OperationHandler.sync is meant for exposing simple RPC handlers.
    return OperationHandler.sync(
        // The method is for making arbitrary short calls to other services or databases, or
        // performing simple computations such as this one. Users can also access a workflow
        // client by calling Nexus.getOperationContext().getWorkflowClient() to make arbitrary
        // calls such as signaling, querying, or listing workflows.
        (ctx, details, input) -> new NexusService.EchoOutput(input.getMessage()));
  }
  // ...
}
```

### Develop an Asynchronous Nexus Operation handler to start a Workflow

Use the `WorkflowRunOperation.fromWorkflowMethod` method, which is the easiest way to expose a Workflow as an Operation.

[core/src/main/java/io/temporal/samples/nexus/handler/NexusServiceImpl.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/handler/NexusServiceImpl.java)

```java
// To create a service implementation, annotate the class with @ServiceImpl and provide the
// interface that the service implements. The service implementation class should have methods that
// return OperationHandler that correspond to the operations defined in the service interface.
@ServiceImpl(service = NexusService.class)
public class NexusServiceImpl {
  // ...

  @OperationImpl
  public OperationHandler<NexusService.HelloInput, NexusService.HelloOutput> hello() {
    // Use the WorkflowRunOperation.fromWorkflowMethod constructor, which is the easiest way to
    // expose a workflow as an operation. To expose a workflow with different input parameters
    // than the operation, or from an untyped stub, use the
    // WorkflowRunOperation.fromWorkflowHandle constructor and the appropriate factory method on
    // WorkflowHandle.
    return WorkflowRunOperation.fromWorkflowMethod(
        (ctx, details, input) ->
            Nexus.getOperationContext()
                    .getWorkflowClient()
                    .newWorkflowStub(
                        HelloHandlerWorkflow.class,
                        // Workflow IDs should typically be business-meaningful IDs and are used
                        // to dedupe workflow starts. For this example, we use the request ID
                        // allocated by Temporal when the caller workflow schedules the
                        // operation; this ID is guaranteed to be stable across retries of this
                        // operation.
                        //
                        // The task queue defaults to the task queue this operation is handled on.
                        WorkflowOptions.newBuilder().setWorkflowId(details.getRequestId()).build())
                ::hello);
  }
}
```

Workflow IDs should typically be business-meaningful IDs and are used to dedupe Workflow starts. In general, the ID should be passed in the Operation input as part of the Nexus Service contract.

:::tip RESOURCES

[Attach multiple Nexus callers to a handler Workflow](/nexus/operations#attaching-multiple-nexus-callers) with a Conflict-Policy of Use-Existing.
:::

#### Map a Nexus Operation input to multiple Workflow arguments

A Nexus Operation can only take one input parameter. If you want a Nexus Operation to start a Workflow that takes multiple arguments, use the `WorkflowRunOperation.fromWorkflowHandle` method.

[core/src/main/java/io/temporal/samples/nexusmultipleargs/handler/NexusServiceImpl.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexusmultipleargs/handler/NexusServiceImpl.java)

```java
// To create a service implementation, annotate the class with @ServiceImpl and provide the
// interface that the service implements. The service implementation class should have methods that
// return OperationHandler that correspond to the operations defined in the service interface.
@ServiceImpl(service = NexusService.class)
public class NexusServiceImpl {
  @OperationImpl
  public OperationHandler<NexusService.EchoInput, NexusService.EchoOutput> echo() {
    // OperationHandler.sync is meant for exposing simple RPC handlers.
    return OperationHandler.sync(
        // The method is for making arbitrary short calls to other services or databases, or
        // performing simple computations such as this one. Users can also access a workflow
        // client by calling Nexus.getOperationContext().getWorkflowClient() to make arbitrary
        // calls such as signaling, querying, or listing workflows.
        (ctx, details, input) -> new NexusService.EchoOutput(input.getMessage()));
  }

  @OperationImpl
  public OperationHandler<NexusService.HelloInput, NexusService.HelloOutput> hello() {
    // If the operation input parameters are different from the workflow input parameters,
    // use the WorkflowRunOperation.fromWorkflowHandle constructor and the appropriate factory
    // method on WorkflowHandle to map the Nexus input to the workflow parameters.
    return WorkflowRunOperation.fromWorkflowHandle(
        (ctx, details, input) ->
            WorkflowHandle.fromWorkflowMethod(
                Nexus.getOperationContext()
                        .getWorkflowClient()
                        .newWorkflowStub(
                            HelloHandlerWorkflow.class,
                            // Workflow IDs should typically be business-meaningful IDs and are
                            // used to dedupe workflow starts. For this example, we use the
                            // request ID allocated by Temporal when the caller workflow
                            // schedules the operation; this ID is guaranteed to be stable
                            // across retries of this operation.
                            //
                            // The task queue defaults to the task queue this operation is
                            // handled on.
                            WorkflowOptions.newBuilder()
                                .setWorkflowId(details.getRequestId())
                                .build())
                    ::hello,
                input.getName(),
                input.getLanguage()));
  }
}
```

### Register a Nexus Service in a Worker

After developing an asynchronous Nexus Operation handler to start a Workflow, the next step is to register a Nexus Service in a Worker.
[core/src/main/java/io/temporal/samples/nexus/handler/HandlerWorker.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/handler/HandlerWorker.java)

```java
package io.temporal.samples.nexus.handler;

public class HandlerWorker {
  public static final String DEFAULT_TASK_QUEUE_NAME = "my-handler-task-queue";

  public static void main(String[] args) {
    WorkflowClient client = ClientOptions.getWorkflowClient(args);

    WorkerFactory factory = WorkerFactory.newInstance(client);
    Worker worker = factory.newWorker(DEFAULT_TASK_QUEUE_NAME);
    worker.registerWorkflowImplementationTypes(HelloHandlerWorkflowImpl.class);
    worker.registerNexusServiceImplementation(new NexusServiceImpl());

    factory.start();
  }
}
```

## Develop a caller Workflow that uses the Nexus Service {#develop-caller-workflow-nexus-service}

Import the Service API package that has the necessary service and operation names and input/output types to execute a Nexus Operation from the caller Workflow:

[core/src/main/java/io/temporal/samples/nexus/caller/EchoCallerWorkflowImpl.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/caller/EchoCallerWorkflowImpl.java)

```java
package io.temporal.samples.nexus.caller;

public class EchoCallerWorkflowImpl implements EchoCallerWorkflow {
  NexusService nexusService =
      Workflow.newNexusServiceStub(
          NexusService.class,
          NexusServiceOptions.newBuilder()
              .setOperationOptions(
                  NexusOperationOptions.newBuilder()
                      .setScheduleToCloseTimeout(Duration.ofSeconds(10))
                      .build())
              .build());

  @Override
  public String echo(String message) {
    return nexusService.echo(new NexusService.EchoInput(message)).getMessage();
  }
}
```

[core/src/main/java/io/temporal/samples/nexus/caller/HelloCallerWorkflowImpl.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/caller/HelloCallerWorkflowImpl.java)

```java
package io.temporal.samples.nexus.caller;

public class HelloCallerWorkflowImpl implements HelloCallerWorkflow {
  NexusService nexusService =
      Workflow.newNexusServiceStub(
          NexusService.class,
          NexusServiceOptions.newBuilder()
              .setOperationOptions(
                  NexusOperationOptions.newBuilder()
                      .setScheduleToCloseTimeout(Duration.ofSeconds(10))
                      .build())
              .build());

  @Override
  public String hello(String message, NexusService.Language language) {
    NexusOperationHandle<NexusService.HelloOutput> handle =
        Workflow.startNexusOperation(
            nexusService::hello, new NexusService.HelloInput(message, language));
    // Optionally wait for the operation to be started. NexusOperationExecution will contain the
    // operation token in case this operation is asynchronous.
    handle.getExecution().get();
    return handle.getResult().get().getMessage();
  }
}
```

### Register the caller Workflow in a Worker

After developing the caller Workflow, the next step is to register it with a Worker.
[core/src/main/java/io/temporal/samples/nexus/caller/CallerWorker.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/caller/CallerWorker.java)

```java
package io.temporal.samples.nexus.caller;

public class CallerWorker {
  public static final String DEFAULT_TASK_QUEUE_NAME = "my-caller-workflow-task-queue";

  public static void main(String[] args) {
    WorkflowClient client = ClientOptions.getWorkflowClient(args);

    WorkerFactory factory = WorkerFactory.newInstance(client);
    Worker worker = factory.newWorker(DEFAULT_TASK_QUEUE_NAME);
    worker.registerWorkflowImplementationTypes(
        WorkflowImplementationOptions.newBuilder()
            .setNexusServiceOptions(
                Collections.singletonMap(
                    "NexusService",
                    NexusServiceOptions.newBuilder().setEndpoint("my-nexus-endpoint-name").build()))
            .build(),
        EchoCallerWorkflowImpl.class,
        HelloCallerWorkflowImpl.class);

    factory.start();
  }
}
```

### Develop a starter to start the caller Workflow

To initiate the caller Workflow, a starter program is used.

[core/src/main/java/io/temporal/samples/nexus/caller/CallerStarter.java](https://github.com/temporalio/samples-java/blob/nexus-snip-sync/core/src/main/java/io/temporal/samples/nexus/caller/CallerStarter.java)

```java
package io.temporal.samples.nexus.caller;

public class CallerStarter {
  private static final Logger logger = LoggerFactory.getLogger(CallerStarter.class);

  public static void main(String[] args) {
    WorkflowClient client = ClientOptions.getWorkflowClient(args);

    WorkflowOptions workflowOptions =
        WorkflowOptions.newBuilder().setTaskQueue(CallerWorker.DEFAULT_TASK_QUEUE_NAME).build();

    EchoCallerWorkflow echoWorkflow =
        client.newWorkflowStub(EchoCallerWorkflow.class, workflowOptions);
    WorkflowExecution execution = WorkflowClient.start(echoWorkflow::echo, "Nexus Echo 👋");
    logger.info(
        "Started EchoCallerWorkflow workflowId: {} runId: {}",
        execution.getWorkflowId(),
        execution.getRunId());
    logger.info("Workflow result: {}", echoWorkflow.echo("Nexus Echo 👋"));

    HelloCallerWorkflow helloWorkflow =
        client.newWorkflowStub(HelloCallerWorkflow.class, workflowOptions);
    execution = WorkflowClient.start(helloWorkflow::hello, "Nexus", NexusService.Language.EN);
    logger.info(
        "Started HelloCallerWorkflow workflowId: {} runId: {}",
        execution.getWorkflowId(),
        execution.getRunId());
    logger.info("Workflow result: {}", helloWorkflow.hello("Nexus", NexusService.Language.ES));
  }
}
```

## Make Nexus calls across Namespaces with a development Server {#nexus-calls-across-namespaces-dev-server}

Follow the steps below to run the Nexus handler Worker, the Nexus caller Worker, and the starter app.

### Run Workers connected to a local development server

Run the Nexus handler Worker:

```bash
./gradlew -q execute -PmainClass=io.temporal.samples.nexus.handler.HandlerWorker \
    --args="-target-host localhost:7233 -namespace my-target-namespace"
```

In another terminal window, run the Nexus caller Worker:

```bash
./gradlew -q execute -PmainClass=io.temporal.samples.nexus.caller.CallerWorker \
    --args="-target-host localhost:7233 -namespace my-caller-namespace"
```

### Start a caller Workflow

With the Workers running, the final step in the local development process is to start a caller Workflow.
Run the starter:

```bash
./gradlew -q execute -PmainClass=io.temporal.samples.nexus.caller.CallerStarter \
    --args="-target-host localhost:7233 -namespace my-caller-namespace"
```

This will result in:

```
[main] INFO i.t.s.nexus.caller.CallerStarter - Started workflow workflowId: 9b3de8ba-28ae-42fb-8087-bdedf4cecd39 runId: 404a2529-764d-4d1d-9de5-8a9475e40fba
[main] INFO i.t.s.nexus.caller.CallerStarter - Workflow result: Nexus Echo 👋
[main] INFO i.t.s.nexus.caller.CallerStarter - Started workflow workflowId: 9cb29897-356a-4714-87b7-aa2f00784a46 runId: 7e71e62a-db50-49da-b081-24b61016a0fc
[main] INFO i.t.s.nexus.caller.CallerStarter - Workflow result: ¡Hola! Nexus 👋
```

### Canceling a Nexus Operation {#canceling-a-nexus-operation}

To cancel a Nexus Operation from within a Workflow, create a `CancellationScope` using the `Workflow.newCancellationScope` API. `Workflow.newCancellationScope` takes a `Runnable`; any SDK methods started in this runnable, such as Nexus Operations, are associated with this scope. It returns a new scope that, when its `cancel()` method is called, cancels the context and any SDK method that was started in the scope.

The promise returned by `Workflow.startNexusOperation` is resolved when the operation finishes, whether it succeeds, fails, times out, or is canceled.

Only asynchronous operations can be canceled in Nexus, because cancellation is sent using an operation token. The Workflow or other resources backing the operation may choose to ignore the cancellation request. If ignored, the operation may enter a terminal state.

When a Nexus Operation is started, the caller can specify different cancellation types that control how the caller reacts to cancellation:

- `ABANDON` - Do not request cancellation of the operation.
- `TRY_CANCEL` - Initiate a cancellation request and immediately report cancellation to the caller. Note that this type doesn't guarantee that cancellation is delivered to the operation handler if the caller exits before the delivery is done.
- `WAIT_REQUESTED` - Request cancellation of the operation and wait for confirmation that the request was received. It doesn't wait for actual cancellation.
- `WAIT_COMPLETED` - Wait for operation completion. The operation may or may not complete as canceled.

The default is `WAIT_COMPLETED`. You can set a different cancellation type by calling `setCancellationType()` on `NexusOperationOptions.Builder` and passing those options via `NexusServiceOptions`.

Once the caller Workflow completes, the caller's Nexus Machinery stops attempting to cancel operations that have not yet been canceled, letting them run to completion. It's okay to leave operations running in some use cases. To ensure cancellations are delivered, wait for all pending operations to deliver their cancellation requests before exiting the Workflow.
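A minimal sketch of this pattern, assuming the `nexusService` stub and input types from the caller examples above (`AtomicReference` is `java.util.concurrent.atomic.AtomicReference`):

```java
// Start the operation inside a cancellation scope, then cancel the scope.
AtomicReference<NexusOperationHandle<NexusService.HelloOutput>> handle = new AtomicReference<>();
CancellationScope scope =
    Workflow.newCancellationScope(
        () -> {
          // SDK calls started inside this Runnable are associated with the scope.
          handle.set(
              Workflow.startNexusOperation(
                  nexusService::hello,
                  new NexusService.HelloInput("Nexus", NexusService.Language.EN)));
          // Wait until the operation has started, so an operation token exists to cancel with.
          handle.get().getExecution().get();
        });
scope.run();
// Request cancellation of everything started in the scope, including the pending operation.
// How this propagates depends on the configured cancellation type.
scope.cancel();
```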
See the [Nexus cancellation sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/nexuscancellation) for reference.

## Make Nexus calls across Namespaces in Temporal Cloud {#nexus-calls-across-namespaces-temporal-cloud}

This section assumes you are already familiar with [how to connect a Worker to Temporal Cloud](https://docs.temporal.io/develop/java/core-application#run-a-temporal-cloud-worker).

The same [source code](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/nexus) is used in this section, but the `tcld` CLI will be used to create Namespaces and the Nexus Endpoint, and mTLS client certificates will be used to securely connect the caller and handler Workers to their respective Temporal Cloud Namespaces.

### Install the latest `tcld` CLI and generate certificates

To install the latest version of the `tcld` CLI, run the following command (on macOS):

```
brew install temporalio/brew/tcld
```

If you don't already have certificates, you can generate them for mTLS Worker authentication using the command below:

```
tcld gen ca --org $YOUR_ORG_NAME --validity-period 1y --ca-cert ca.pem --ca-key ca.key
```

These certificates will be valid for one year.

### Create caller and handler Namespaces

Before deploying to Temporal Cloud, ensure that the appropriate Namespaces are created for both the caller and handler. If you already have these Namespaces, you don't need to do this. The angle-bracket values below are placeholders for your own Namespace names:

```
tcld login

tcld namespace create \
  --namespace <caller-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1

tcld namespace create \
  --namespace <handler-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1
```

Alternatively, you can create Namespaces through the UI: [https://cloud.temporal.io/Namespaces](https://cloud.temporal.io/Namespaces).

### Create a Nexus Endpoint to route requests from caller to handler

To create a Nexus Endpoint you must have a Developer account role or higher, and have NamespaceAdmin permission on the `--target-namespace`.

```
tcld nexus endpoint create \
  --name <endpoint-name> \
  --target-task-queue my-handler-task-queue \
  --target-namespace <handler-namespace> \
  --allow-namespace <caller-namespace> \
  --description-file ./core/src/main/java/io/temporal/samples/nexus/service/description.md
```

The `--allow-namespace` option is used to build an Endpoint allowlist of caller Namespaces that can use the Nexus Endpoint, as described in Runtime Access Control.

Alternatively, you can create a Nexus Endpoint through the UI: [https://cloud.temporal.io/nexus](https://cloud.temporal.io/nexus).

### Run Workers Connected to Temporal Cloud

Run the handler Worker:

```
./gradlew -q execute -PmainClass=io.temporal.samples.nexus.handler.HandlerWorker \
    --args="-target-host <handler-namespace>.<account-id>.tmprl.cloud:7233 \
    -namespace <handler-namespace>.<account-id> \
    -client-cert 'path/to/your/ca.pem' \
    -client-key 'path/to/your/ca.key'"
```

Run the caller Worker:

```
./gradlew -q execute -PmainClass=io.temporal.samples.nexus.caller.CallerWorker \
    --args="-target-host <caller-namespace>.<account-id>.tmprl.cloud:7233 \
    -namespace <caller-namespace>.<account-id> \
    -client-cert 'path/to/your/ca.pem' \
    -client-key 'path/to/your/ca.key'"
```

### Start a caller Workflow

```
./gradlew -q execute -PmainClass=io.temporal.samples.nexus.caller.CallerStarter \
    --args="-target-host <caller-namespace>.<account-id>.tmprl.cloud:7233 \
    -namespace <caller-namespace>.<account-id> \
    -client-cert 'path/to/your/ca.pem' \
    -client-key 'path/to/your/ca.key'"
```

This will result in:

```
[main] INFO i.t.s.nexus.caller.CallerStarter - Started workflow workflowId: 9b3de8ba-28ae-42fb-8087-bdedf4cecd39 runId: 404a2529-764d-4d1d-9de5-8a9475e40fba
[main] INFO i.t.s.nexus.caller.CallerStarter - Workflow result: Nexus Echo 👋
[main] INFO i.t.s.nexus.caller.CallerStarter - Started workflow workflowId: 9cb29897-356a-4714-87b7-aa2f00784a46 runId: 7e71e62a-db50-49da-b081-24b61016a0fc
[main] INFO i.t.s.nexus.caller.CallerStarter - Workflow result: ¡Hola! Nexus 👋
```
## Observability

### Web UI

A synchronous Nexus Operation surfaces in the caller's Workflow history with just `NexusOperationScheduled` and `NexusOperationCompleted` events.

An asynchronous Nexus Operation surfaces in the caller's Workflow history with `NexusOperationScheduled`, `NexusOperationStarted`, and `NexusOperationCompleted` events.

### Temporal CLI

Use the `workflow describe` command to show pending Nexus Operations in the caller Workflow and any attached callbacks on the handler Workflow:

```
temporal workflow describe -w <workflow-id>
```

Nexus events are included in the caller's Workflow history:

```
temporal workflow show -w <workflow-id>
```

For **asynchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationStarted`
- `NexusOperationCompleted`

For **synchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationCompleted`

:::note

`NexusOperationStarted` isn't reported in the caller's history for synchronous operations.

:::

## Learn more

- Read the high-level description of the [Temporal Nexus feature](/evaluate/nexus) and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- Learn how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4) and [Encyclopedia](/nexus).
- Deploy Nexus Endpoints in production with [Temporal Cloud](/cloud/nexus).

---

## Testing - Java SDK

The Testing section of the Temporal Application development guide describes the frameworks that facilitate Workflow and integration testing. In the context of Temporal, you can create these types of automated tests:

- **End-to-end:** Running a Temporal Server and Worker with all its Workflows, Activities, and Nexus Operations; starting and interacting with Workflows from a Client.
- **Integration:** Anything between end-to-end and unit testing.
  - Running Activities with mocked Context and other SDK imports (and usually network requests).
  - Running Workers with mock Activities and Nexus Operations, and using a Client to start Workflows.
  - Running Workflows with mocked SDK imports.
- **Unit:** Running a piece of Workflow, Activity, or Nexus Operation code (a function or method) and mocking any code it calls.

We generally recommend writing the majority of your tests as integration tests. Because the test server supports skipping time, use the test server for both end-to-end and integration tests with Workers.

## Test frameworks {#test-frameworks}

The Temporal Java SDK provides a test framework to facilitate Workflow unit and integration testing. The test framework provides a `TestWorkflowEnvironment` class which includes an in-memory implementation of the Temporal service that supports automatic time skipping. This allows you to easily test long-running Workflows in seconds, without having to change your Workflow code. You can use the provided `TestWorkflowEnvironment` with a Java unit testing framework of your choice, such as JUnit.
### Setup testing dependency

To start using the Java SDK test framework, you need to add [`io.temporal:temporal-testing`](https://search.maven.org/artifact/io.temporal/temporal-testing) as a dependency to your project:

**[Apache Maven](https://maven.apache.org/):**

```maven
<dependency>
  <groupId>io.temporal</groupId>
  <artifactId>temporal-testing</artifactId>
  <version>1.17.0</version>
  <scope>test</scope>
</dependency>
```

**[Gradle Groovy DSL](https://gradle.org/):**

```groovy
testImplementation("io.temporal:temporal-testing:1.17.0")
```

Make sure to set the version that matches your dependency version of the [Temporal Java SDK](https://github.com/temporalio/sdk-java).

### Sample unit tests

The following code implements unit tests for the `HelloActivity` sample:

```java
public class HelloActivityTest {

  // Task Queue shared by the Worker and the Workflow stub in this test.
  private static final String TASK_QUEUE = "HelloActivityTaskQueue";

  private TestWorkflowEnvironment testEnv;
  private Worker worker;
  private WorkflowClient client;

  // Set up the test workflow environment
  @Before
  public void setUp() {
    testEnv = TestWorkflowEnvironment.newInstance();
    worker = testEnv.newWorker(TASK_QUEUE);
    // Register your workflow implementations
    worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);
    client = testEnv.getWorkflowClient();
  }

  // Clean up test environment after tests are completed
  @After
  public void tearDown() {
    testEnv.close();
  }

  @Test
  public void testActivityImpl() {
    // This uses the actual activity impl
    worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
    // Start test environment
    testEnv.start();

    // Create the workflow stub
    GreetingWorkflow workflow =
        client.newWorkflowStub(
            GreetingWorkflow.class, WorkflowOptions.newBuilder().setTaskQueue(TASK_QUEUE).build());
    // Execute our workflow waiting for it to complete
    String greeting = workflow.getGreeting("World");
    assertEquals("Hello World!", greeting);
  }
}
```

In cases where you do not wish to execute your actual Activity or Nexus Operation implementations during unit testing, you can use a framework such as Mockito to mock them. The following code implements a unit test for the `HelloActivity` sample which shows how activities can be mocked:

```java
public class HelloActivityTest {

  // Task Queue shared by the Worker and the Workflow stub in this test.
  private static final String TASK_QUEUE = "HelloActivityTaskQueue";

  private TestWorkflowEnvironment testEnv;
  private Worker worker;
  private WorkflowClient client;

  // Set up the test workflow environment
  @Before
  public void setUp() {
    testEnv = TestWorkflowEnvironment.newInstance();
    worker = testEnv.newWorker(TASK_QUEUE);
    // Register your workflow implementations
    worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);
    client = testEnv.getWorkflowClient();
  }

  // Clean up test environment after tests are completed
  @After
  public void tearDown() {
    testEnv.close();
  }

  @Test
  public void testMockedActivity() {
    // Mock our workflow activity
    GreetingActivities activities = mock(GreetingActivities.class);
    when(activities.composeGreeting("Hello", "World")).thenReturn("Hello Mocked World!");
    worker.registerActivitiesImplementations(activities);
    // Start test environment
    testEnv.start();

    // Create the workflow stub
    GreetingWorkflow workflow =
        client.newWorkflowStub(
            GreetingWorkflow.class, WorkflowOptions.newBuilder().setTaskQueue(TASK_QUEUE).build());
    // Execute our workflow waiting for it to complete
    String greeting = workflow.getGreeting("World");
    assertEquals("Hello Mocked World!", greeting);
  }
}
```

### Testing with JUnit4

For JUnit4 tests, Temporal provides the `TestWorkflowRule` class, which simplifies the Temporal test environment setup as well as the creation and shutdown of Workflow Workers in your tests.
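If you resolve these helpers through Gradle feature variants, the test dependency can declare the corresponding capability. A minimal sketch follows; the `io.temporal:temporal-testing-junit4` capability name is an assumption here, mirroring the `io.temporal:temporal-testing-junit5` capability named in the JUnit5 section below:

```groovy
// Gradle Groovy DSL -- the junit4 capability name is assumed by analogy.
testImplementation("io.temporal:temporal-testing:1.17.0") {
    capabilities {
        requireCapability 'io.temporal:temporal-testing-junit4'
    }
}
```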
Make sure to set the version that matches your dependency version of the [Temporal Java SDK](https://github.com/temporalio/sdk-java).

We can now rewrite the `HelloActivityTest` test class shown above as follows:

```java
public class HelloActivityJUnit4Test {

  @Rule
  public TestWorkflowRule testWorkflowRule =
      TestWorkflowRule.newBuilder()
          .setWorkflowTypes(GreetingWorkflowImpl.class)
          .setActivityImplementations(new GreetingActivitiesImpl())
          .build();

  @Test
  public void testActivityImpl() {
    // Get a workflow stub using the same task queue the worker uses.
    GreetingWorkflow workflow =
        testWorkflowRule
            .getWorkflowClient()
            .newWorkflowStub(
                GreetingWorkflow.class,
                WorkflowOptions.newBuilder().setTaskQueue(testWorkflowRule.getTaskQueue()).build());
    // Execute a workflow waiting for it to complete.
    String greeting = workflow.getGreeting("World");
    assertEquals("Hello World!", greeting);

    testWorkflowRule.getTestEnvironment().shutdown();
  }
}
```

### Testing with JUnit5

For JUnit5 tests, Temporal also provides the `TestWorkflowExtension` helper class. This class can be used to simplify the Temporal test environment setup as well as Workflow Worker startup and shutdown. To start using the JUnit5 `TestWorkflowExtension` in your tests with [Gradle](https://gradle.org/), you need to enable the [`io.temporal:temporal-testing-junit5`] capability on the `io.temporal:temporal-testing` test dependency (analogous to the JUnit4 snippet above). Make sure to set the version that matches your dependency version of the [Temporal Java SDK](https://github.com/temporalio/sdk-java).

We can now use JUnit5 and rewrite the `HelloActivityTest` test class shown above as follows:

```java
public class HelloActivityJUnit5Test {

  @RegisterExtension
  public static final TestWorkflowExtension testWorkflowExtension =
      TestWorkflowExtension.newBuilder()
          .setWorkflowTypes(GreetingWorkflowImpl.class)
          .setActivityImplementations(new GreetingActivitiesImpl())
          .build();

  @Test
  public void testActivityImpl(
      TestWorkflowEnvironment testEnv, Worker worker, GreetingWorkflow workflow) {
    // Execute a workflow waiting for it to complete.
    String greeting = workflow.getGreeting("World");
    assertEquals("Hello World!", greeting);
  }
}
```

You can find all unit tests for the [Temporal Java samples](https://github.com/temporalio/samples-java) repository in [its test package](https://github.com/temporalio/samples-java/tree/main/core/src/test/java/io/temporal/samples).

## Test Activities {#test-activities}

Mocking isolates code undergoing testing so the focus remains on the code, and not on external dependencies or other state. You can test Activities using a mocked Activity environment. This approach offers a way to mock the Activity context, listen to Heartbeats, and cancel the Activity. You test the Activity in isolation, calling it directly without needing to create a Worker to run it.

Temporal provides the `TestActivityEnvironment` and `TestActivityExtension` classes for testing Activities outside the scope of a Workflow. Testing Activities is similar to testing non-Temporal Java code. For example, you can test an Activity for:

- Exceptions thrown when invoking the Activity Execution.
- Exceptions thrown when checking for the result of the Activity Execution.
- The Activity's return values. Check that the return value matches the expected value.

### Run an Activity {#run-an-activity}

During isolation testing, if an Activity references its context, you'll need to mock that context. Mocked information stands in for the context, allowing you to focus your testing on the Activity's code.
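For example, a minimal sketch using `TestActivityEnvironment`, reusing the `GreetingActivities` interface and implementation from the samples above (the heartbeat-details type is an illustrative assumption):

```java
// Create an in-memory Activity test environment; no Worker is needed.
TestActivityEnvironment testEnv = TestActivityEnvironment.newInstance();
testEnv.registerActivitiesImplementations(new GreetingActivitiesImpl());

// Optionally observe Heartbeats the Activity emits (details type is up to you).
testEnv.setActivityHeartbeatListener(
    String.class, details -> System.out.println("heartbeat: " + details));

// Call the Activity directly through a stub and assert on the result.
GreetingActivities activity = testEnv.newActivityStub(GreetingActivities.class);
assertEquals("Hello World!", activity.composeGreeting("Hello", "World"));
```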
### Listen to Heartbeats {#listen-to-heartbeats}

Activities usually issue periodic Heartbeats, a feature that broadcasts recurring proof-of-life updates. Each ping shows that an Activity is making progress and the Worker hasn't crashed. Heartbeats may include details that report task progress in the event an Activity Worker crashes.

When testing Activities that support Heartbeats, make sure you can see those Heartbeats in your test code and provide appropriate test coverage. This enables you to verify both the Heartbeat's content and behavior.

### Cancel an Activity {#cancel-an-activity}

Activity cancellation lets Activities know they don't need to continue work and gives time for the Activity to clean up any resources it's created. You can cancel Java-based Activities if they emit Heartbeats. To test an Activity that reacts to Cancellations, make sure that the Activity reacts correctly and cancels.

## Testing Workflows {#test-workflows}

### How to mock Activities {#mock-activities}

Mock the Activity invocation when unit testing your Workflows. When integration testing Workflows with a Worker, you can mock Activities by providing mock Activity implementations to the Worker.

### How to mock Nexus Operations {#mock-nexus-operations}

Mock the Nexus Operation invocation when unit testing your Workflows. When integration testing Workflows with a Worker, you can mock Nexus Operations by providing mock Nexus Service implementations to the Worker.

### How to skip time {#skip-time}

Some long-running Workflows can persist for months or even years. Using the test framework allows your Workflow code to skip time and complete your tests in seconds, rather than waiting out the durations your Workflow specifies.

For example, if you have a Workflow sleep for a day, or have an Activity failure with a long retry interval, you don't need to wait the entire length of the sleep period to test whether the sleep function works. Instead, test the logic that happens after the sleep by skipping forward in time and complete your tests in a timely manner.

The test framework included in most SDKs is an in-memory implementation of Temporal Server that supports skipping time. Time is a global property of an instance of `TestWorkflowEnvironment`: skipping time (either automatically or manually) applies to all currently running tests. If you need different time behaviors for different tests, run your tests in a series or with separate instances of the test server. For example, you could run all tests with automatic time skipping in parallel, and then all tests with manual time skipping in series, and then all tests without time skipping in parallel.

#### Set up time skipping {#setting-up}

Set up the time-skipping test framework in the SDK of your choice.

You can skip time automatically in the SDK of your choice. Start a test server process that skips time as needed. For example, in the time-skipping mode, Timers, which include sleeps and conditional timeouts, are fast-forwarded except when Activities or Nexus Operation handlers are running. Nexus Operation handlers time out after 10 seconds, and time skipping is allowed while waiting for retries.

Skip time manually in the SDK of your choice.
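For example, a minimal sketch of manual time skipping with `TestWorkflowEnvironment`; the `SleepingWorkflow` interface, whose `run` method sleeps for a day, is an assumed example, not part of the SDK:

```java
TestWorkflowEnvironment testEnv = TestWorkflowEnvironment.newInstance();
Worker worker = testEnv.newWorker("test-task-queue");
worker.registerWorkflowImplementationTypes(SleepingWorkflowImpl.class);
testEnv.start();

SleepingWorkflow workflow =
    testEnv
        .getWorkflowClient()
        .newWorkflowStub(
            SleepingWorkflow.class,
            WorkflowOptions.newBuilder().setTaskQueue("test-task-queue").build());
// Start the Workflow without blocking on its result.
WorkflowClient.start(workflow::run);

// Advance the test server's clock by a day instead of waiting in real time.
testEnv.sleep(Duration.ofDays(1));
```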
## How to Replay a Workflow Execution {#replay}

Replay recreates the exact state of a Workflow Execution. You can replay a Workflow from the beginning of its Event History. Replay succeeds only if the [Workflow Definition](/workflow-definition) is compatible with the provided history from a deterministic point of view.

When you test changes to your Workflow Definitions, we recommend doing the following as part of your CI checks:

1. Determine which Workflow Types or Task Queues (or both) will be targeted by the Worker code under test.
2. Download the Event Histories of a representative set of recent open and closed Workflows from each Task Queue, either programmatically using the SDK client or via the Temporal CLI.
3. Run the Event Histories through replay.
4. Fail CI if any error is encountered during replay.

The following are examples of fetching and replaying Event Histories:

To replay Workflow Executions, use the [WorkflowReplayer](https://www.javadoc.io/doc/io.temporal/temporal-testing/latest/io/temporal/testing/WorkflowReplayer.html) class in the `temporal-testing` package. In the following example, Event Histories are downloaded from the server, and then replayed. Note that this requires Advanced Visibility to be enabled.

```java
// Note we assume you already have a WorkflowServiceStubs (`service`) and WorkflowClient (`client`)
// in scope.
ListWorkflowExecutionsRequest listWorkflowExecutionRequest =
    ListWorkflowExecutionsRequest.newBuilder()
        .setNamespace(client.getOptions().getNamespace())
        .setQuery("TaskQueue = 'mytaskqueue'")
        .build();
ListWorkflowExecutionsResponse listWorkflowExecutionsResponse =
    service.blockingStub().listWorkflowExecutions(listWorkflowExecutionRequest);
List<WorkflowExecutionHistory> histories =
    listWorkflowExecutionsResponse.getExecutionsList().stream()
        .map(
            (info) -> {
              GetWorkflowExecutionHistoryResponse weh =
                  service
                      .blockingStub()
                      .getWorkflowExecutionHistory(
                          GetWorkflowExecutionHistoryRequest.newBuilder()
                              .setNamespace(client.getOptions().getNamespace())
                              .setExecution(info.getExecution())
                              .build());
              return new WorkflowExecutionHistory(
                  weh.getHistory(), info.getExecution().getWorkflowId());
            })
        .collect(Collectors.toList());

WorkflowReplayer.replayWorkflowExecutions(
    histories, true, WorkflowA.class, WorkflowB.class, WorkflowC.class);
```

In the next example, a single history is loaded from a JSON file on disk:

```java
File file = new File("my_history.json");
WorkflowReplayer.replayWorkflowExecution(file, MyWorkflow.class);
```

In both examples, if an Event History is non-deterministic with respect to the Workflow code, an error is thrown. You can choose to wait until all histories have been replayed with `replayWorkflowExecutions` by setting the `failFast` argument to `false`.

---

## Durable Timers - Java SDK

## What is a Timer? {#timers}

A Workflow can set a durable Timer for a fixed time period. In some SDKs, the function is called `sleep()`, and in others, it's called `timer()`. A Workflow can sleep for months. Timers are persisted, so even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service are back up, the `Workflow.sleep()` call resolves and your code continues executing. Sleeping is a resource-light operation: it does not tie up the process, and you can run millions of Timers off a single Worker.

To set a Timer in Java, use [`Workflow.sleep()`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/workflow/Workflow.html#sleep) and pass the duration you want to wait before continuing.
```java
Workflow.sleep(Duration.ofSeconds(5));
```

---

## Versioning - Java SDK

Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress. The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows.

With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version. There are two primary Versioning methods that you can use:

- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). The Worker Versioning feature allows you to tag your Workers and programmatically roll them out in versioned deployments, so that old Workers can run old code paths and new Workers can run new code paths.
- [Versioning with Patching](#patching). This method works by adding branches to your code tied to specific revisions. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions.

:::danger

Support for the pre-2025 experimental Worker Versioning APIs will be removed from Temporal Server in March 2026. Refer to the [latest Worker Versioning docs](/worker-versioning) for guidance. You can still refer to the [Worker Versioning Legacy](worker-versioning-legacy) docs if needed.

:::

## Worker Versioning

Temporal's [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) feature allows you to tag your Workers and programmatically roll them out in Deployment Versions, so that old Workers can run old code paths and new Workers can run new code paths. This way, you can pin your Workflows to specific revisions, avoiding the need for patching.

## Versioning with Patching {#patching}

### Patching with GetVersion

A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow Executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning).

Consider the following Workflow Definition:

```java
public void processFile(Arguments args) {
  String localName = null;
  String processedName = null;
  try {
    localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());
    processedName = activities.processFile(localName);
    activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);
  } finally {
    if (localName != null) { // File was downloaded.
      activities.deleteLocalFile(localName);
    }
    if (processedName != null) { // File was processed.
      activities.deleteLocalFile(processedName);
    }
  }
}
```

Imagine you want to revise this Workflow by adding another Activity to calculate a file checksum. If an existing Workflow Execution was started by the original version of the Workflow code, where there was no `calculateChecksum()` Activity, and then resumed running on a new Worker where this Activity had been added, the server-side Event History would be out of sync. This would cause the Workflow to fail with a nondeterminism error.
To resolve this, you can use `Workflow.getVersion()` to patch your Workflow:

```java
public void processFile(Arguments args) {
  String localName = null;
  String processedName = null;
  try {
    localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());
    processedName = activities.processFile(localName);
    int version = Workflow.getVersion("checksumAdded", Workflow.DEFAULT_VERSION, 1);
    if (version == Workflow.DEFAULT_VERSION) {
      activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);
    } else {
      long checksum = activities.calculateChecksum(processedName);
      activities.uploadWithChecksum(
          args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);
    }
  } finally {
    if (localName != null) { // File was downloaded.
      activities.deleteLocalFile(localName);
    }
    if (processedName != null) { // File was processed.
      activities.deleteLocalFile(processedName);
    }
  }
}
```

When `Workflow.getVersion()` is run for the new Workflow Execution, it records a marker in the Event History so that all future calls to `getVersion` for this change id — `checksumAdded` in the example — on this Workflow Execution will always return the given version number, which is `1` in the example.

After all the Workflow Executions prior to version 1 have left retention, you can remove the code for that version.

```java
public void processFile(Arguments args) {
  String localName = null;
  String processedName = null;
  try {
    localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());
    processedName = activities.processFile(localName);
    // getVersion call is left here to ensure that any attempt to replay history
    // for a different version fails. It can be removed later when there is no possibility
    // of this happening.
    Workflow.getVersion("checksumAdded", 1, 1);
    long checksum = activities.calculateChecksum(processedName);
    activities.uploadWithChecksum(
        args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);
  } finally {
    if (localName != null) { // File was downloaded.
      activities.deleteLocalFile(localName);
    }
    if (processedName != null) { // File was processed.
      activities.deleteLocalFile(processedName);
    }
  }
}
```

### Adding Support for Versioned Workflow Visibility in the Event History

In other Temporal SDKs, when you invoke `getVersion` or the patching API, the SDK records an `UpsertWorkflowSearchAttribute` Event in the history. This adds support for a custom query parameter in the web UI named `TemporalChangeVersion` that allows you to filter Workflows based on their version. The Java SDK does not automatically add this attribute, so you'll likely want to do it manually.

Within your Workflow Implementation code you'll need to perform the following steps:

#### Import the `SearchAttributeKey` class

```java
import io.temporal.common.SearchAttributeKey;
```

#### Define the `SearchAttributeKey` object

This object is used as the key within the Search Attributes. Define it as a class-level constant:

```java
public static final SearchAttributeKey<List<String>> TEMPORAL_CHANGE_VERSION =
    SearchAttributeKey.forKeywordList("TemporalChangeVersion");
```

#### Set the Search Attribute using `upsert`

You should set this attribute when you make the call to `getVersion`.

```java
int version = Workflow.getVersion("MovedThankYouAfterLoop", Workflow.DEFAULT_VERSION, 1);
if (version != Workflow.DEFAULT_VERSION) {
  Workflow.upsertTypedSearchAttributes(
      Constants.TEMPORAL_CHANGE_VERSION.valueSet(
          Arrays.asList("MovedThankYouAfterLoop-" + version)));
}
```

You should only set the attribute on new versions.
#### Setting Attributes for Multiple `getVersion` Calls

The code in the previous section works well for code that only has one call to `getVersion()`. However, you may encounter situations where you need multiple calls to `getVersion()` to handle multiple independent changes to your Workflow. In this case, you should create a list of all the version changes and then set the attribute value:

```java
List<String> list = new ArrayList<>();
int versionOne = Workflow.getVersion("versionOne", Workflow.DEFAULT_VERSION, 1);
int versionTwo = Workflow.getVersion("versionTwo", Workflow.DEFAULT_VERSION, 1);

if (versionOne != Workflow.DEFAULT_VERSION) {
  list.add("versionOne-" + versionOne);
}
if (versionTwo != Workflow.DEFAULT_VERSION) {
  list.add("versionTwo-" + versionTwo);
}

Workflow.upsertTypedSearchAttributes(Constants.TEMPORAL_CHANGE_VERSION.valueSet(list));
```

Patching allows you to make changes to currently running Workflows. It is a powerful method for introducing compatible changes without introducing non-determinism errors.

### Workflow cutovers

To understand why Patching is useful, it's helpful to demonstrate cutting over an entire Workflow. Since incompatible changes only affect open Workflow Executions of the same type, you can avoid determinism errors by creating a whole new Workflow when making changes. To do this, you can copy the Workflow Definition function, giving it a different name, and register both names with your Workers. For example, you would duplicate `PizzaWorkflow` as `PizzaWorkflowV2`:

```java
@WorkflowInterface
public interface PizzaWorkflow {
  @WorkflowMethod
  public OrderConfirmation pizzaWorkflow(PizzaOrder order);
}

public class PizzaWorkflowImpl implements PizzaWorkflow {
  @Override
  public OrderConfirmation pizzaWorkflow(PizzaOrder order) {
    // implementation code omitted for this example
  }
}

@WorkflowInterface
public interface PizzaWorkflowV2 {
  @WorkflowMethod
  public OrderConfirmation pizzaWorkflow(PizzaOrder order);
}

public class PizzaWorkflowImplV2 implements PizzaWorkflowV2 {
  @Override
  public OrderConfirmation pizzaWorkflow(PizzaOrder order) {
    // implementation code omitted for this example
  }
}
```

It is necessary to create a separate interface because a Workflow Interface can only have one Workflow Method. You would then need to update the Worker configuration, and any other identifier strings, to register both Workflow Types:

```java
worker.registerWorkflowImplementationTypes(PizzaWorkflowImpl.class);
worker.registerWorkflowImplementationTypes(PizzaWorkflowImplV2.class);
```

The downside of this method is that it requires you to duplicate code and to update any commands used to start the Workflow. This can become impractical over time. This method also does not provide a way to version any still-running Workflows -- it is essentially just a cutover, unlike Patching.

### Testing a Workflow for replay safety

To determine whether your Workflow needs a patch, or whether you've patched it successfully, you should incorporate [Replay Testing](/develop/java/testing-suite#replay).

---

## Worker Versioning (Legacy) - Java SDK

## How to use Worker Versioning in Java (Deprecated) {#worker-versioning}

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). See the [Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

:::

A Build ID corresponds to a deployment.
If you don't already have one, we recommend a hash of the code--such as a Git SHA--combined with a human-readable timestamp. To use Worker Versioning, you need to pass a Build ID to your Java Worker and opt in to Worker Versioning.

### Assign a Build ID to your Worker and opt in to Worker Versioning

You should understand assignment rules before completing this step. See the [Worker Versioning Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

To enable Worker Versioning for your Worker, assign the Build ID--perhaps from an environment variable--and turn it on.

```java
// ...
WorkerOptions workerOptions =
    WorkerOptions.newBuilder()
        .setBuildId(buildId)
        .setUseBuildIdForVersioning(true)
        // ...
        .build();
Worker w = workerFactory.newWorker("your_task_queue_name", workerOptions);
// ...
```

:::warning

Importantly, when you start this Worker, it won't receive any tasks until you set up assignment rules.

:::

### Specify versions for Activities, Child Workflows, and Continue-as-New

:::caution

Java support for this feature is under construction!

:::

By default, Activities, Child Workflows, and Continue-as-New Workflows are run on the build of the Workflow that created them if they are also configured to run on the same Task Queue. When configured to run on a separate Task Queue, they will default to using the current assignment rules. If you want to override this behavior, you can specify your intent via the `setVersioningIntent` method on the `ActivityOptions`, `ChildWorkflowOptions`, or `ContinueAsNewOptions` objects.

For example, if you want an Activity to use the latest assignment rules rather than inheriting from its parent:

```java
// ...
private final MyActivity activity =
    Workflow.newActivityStub(
        MyActivity.class,
        ActivityOptions.newBuilder()
            .setScheduleToCloseTimeout(Duration.ofSeconds(10))
            .setVersioningIntent(VersioningIntent.VERSIONING_INTENT_USE_ASSIGNMENT_RULES)
            // ...other options
            .build());
// ...
```

### Tell the Task Queue about your Worker's Build ID (Deprecated)

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

Now you can use the SDK (or the Temporal CLI) to tell the Task Queue about your Worker's Build ID. You might want to do this as part of your CI deployment process.

```java
// ...
workflowClient.updateWorkerBuildIdCompatability(
    "your_task_queue_name", BuildIdOperation.newIdInNewDefaultSet("deadbeef"));
```

This code adds the `deadbeef` Build ID to the Task Queue as the sole version in a new version set, which becomes the default for the queue. New Workflows execute on Workers with this Build ID, and existing ones will continue to be processed by appropriately compatible Workers.

If, instead, you want to add the Build ID to an existing compatible set, you can do this:

```java
// ...
workflowClient.updateWorkerBuildIdCompatability(
    "your_task_queue_name", BuildIdOperation.newCompatibleVersion("deadbeef", "some-existing-build-id"));
```

This code adds `deadbeef` to the existing compatible set containing `some-existing-build-id` and marks it as the new default Build ID for that set.

You can also promote an existing Build ID in a set to be the default for that set:

```java
// ...
workflowClient.updateWorkerBuildIdCompatability(
    "your_task_queue_name", BuildIdOperation.promoteBuildIdWithinSet("deadbeef"));
```

You can also promote an entire set to become the default set for the queue. New Workflows will start using that set's default.

```java
// ...
workflowClient.updateWorkerBuildIdCompatability(
    "your_task_queue_name", BuildIdOperation.promoteSetByBuildId("deadbeef"));
```

---

## Asynchronous Activity Completion - PHP SDK

## How to asynchronously complete an Activity {#asynchronous-activity-completion}

[Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) enables the Activity Function to return without the Activity Execution completing.

There are three steps to follow:

1. The Activity provides the external system with identifying information needed to complete the Activity Execution. Identifying information can be a [Task Token](/activity-execution#task-token), or a combination of Namespace, Workflow Id, and Activity Id.
2. The Activity Function completes in a way that identifies it as waiting to be completed by an external system.
3. The Temporal Client is used to Heartbeat and complete the Activity.

Sometimes Workflows need to perform certain operations in parallel. Invoking an Activity stub without `yield` returns the Activity result promise, which can be resolved at a later moment. Calling `yield` on the promise blocks until a result is available.

> The Activity promise also exposes a `then` method to construct promise chains.
> Read more about Promises [here](https://github.com/reactphp/promise).

Alternatively, you can explicitly wrap your code (including `yield` constructs) using `Workflow::async`, which executes the nested code in parallel with the main Workflow code. Call `yield` on the Promise returned by `Workflow::async` to merge the execution result back into the primary Workflow method.

```php
public function greet(string $name): \Generator
{
    // Workflow::async runs its Activities and Child Workflows in a separate coroutine.
    // Use the yield keyword to merge it back into the parent process.
    $first = Workflow::async(
        function () use ($name) {
            $hello = yield $this->greetingActivity->composeGreeting('Hello', $name);
            $bye = yield $this->greetingActivity->composeGreeting('Bye', $name);

            return $hello . '; ' . $bye;
        }
    );

    $second = Workflow::async(
        function () use ($name) {
            $hello = yield $this->greetingActivity->composeGreeting('Hola', $name);
            $bye = yield $this->greetingActivity->composeGreeting('Chao', $name);

            return $hello . '; ' . $bye;
        }
    );

    // blocks until $first and $second complete
    return (yield $first) . "\n" . (yield $second);
}
```

**Async completion**

There are certain scenarios when moving on from an Activity upon completion of its function is not possible or desirable. For example, you might have an application that requires user input to complete the Activity. You could implement the Activity with a polling mechanism, but a simpler and less resource-intensive implementation is to asynchronously complete a Temporal Activity.

There are two parts to implementing an asynchronously completed Activity:

1. The Activity provides the information necessary for completion from an external system and notifies the Temporal service that it is waiting for that outside callback.
2. The external service calls the Temporal service to complete the Activity.
The following example demonstrates the first part:

[app/src/AsyncActivityCompletion/GreetingActivity.php](https://github.com/temporalio/samples-php/blob/main/app/src/AsyncActivityCompletion/GreetingActivity.php)

```php
class GreetingActivity implements GreetingActivityInterface
{
    private LoggerInterface $logger;

    public function __construct()
    {
        $this->logger = new Logger();
    }

    /**
     * Demonstrates how to implement an Activity asynchronously.
     * When {@link Activity::doNotCompleteOnReturn()} is called,
     * returning from the Activity implementation function doesn't complete the Activity.
     */
    public function composeGreeting(string $greeting, string $name): string
    {
        // In real life this request can be executed anywhere, for example by a separate service.
        $this->logger->info(sprintf('GreetingActivity token: %s', base64_encode(Activity::getInfo()->taskToken)));
        // Send the taskToken to the external service that will complete the Activity.
        // Return from the Activity a function indicating that Temporal should wait
        // for an async completion message.
        Activity::doNotCompleteOnReturn();

        // When doNotCompleteOnReturn() is invoked the return value is ignored.
        return 'ignored';
    }
}
```

The following code demonstrates how to complete the Activity successfully using `WorkflowClient`:

[app/src/AsyncActivityCompletion/CompleteCommand.php](https://github.com/temporalio/samples-php/blob/main/app/src/AsyncActivityCompletion/CompleteCommand.php)

```php
$client = $this->workflowClient->newActivityCompletionClient();

// Complete the Activity.
$client->completeByToken(
    base64_decode($input->getArgument('token')),
    $input->getArgument('message')
);
```

To fail the Activity, you would do the following:

```php
// Fail the Activity.
$activityClient->completeExceptionallyByToken($taskToken, new \Error("activity failed"));
```

---

## Interrupt a Workflow - PHP SDK

## Cancel an Activity from a Workflow {#cancel-an-activity}

Canceling an Activity from within a Workflow requires that the Activity Execution sends Heartbeats and sets a Heartbeat Timeout. If the Heartbeat is not invoked, the Activity cannot receive a cancellation request. When any non-immediate Activity is executed, the Activity Execution should send Heartbeats and set a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) to ensure that the server knows it is still working.

When an Activity is canceled, an error is raised in the Activity at the next available opportunity. If cleanup logic needs to be performed, it can be done in a `finally` clause or inside a caught cancel error. However, for the Activity to appear canceled, the exception needs to be re-raised.

:::note

Unlike regular Activities, [Local Activities](/local-activity) can be canceled even if they don't send Heartbeats. Local Activities are handled locally, and all the information needed to handle the cancellation logic is available in the same Worker process.

:::

## Reset a Workflow Execution {#reset}

Resetting a Workflow Execution terminates the current Workflow Execution and starts a new Workflow Execution from a point you specify in its Event History. Use reset when a Workflow is blocked due to a non-deterministic error or other issues that prevent it from completing.

When you reset a Workflow, the Event History up to the reset point is copied to the new Workflow Execution, and the Workflow resumes from that point with the current code. Reset only works if you've fixed the underlying issue, such as removing non-deterministic code.
Any progress made after the reset point will be discarded. Provide a reason when resetting, as it will be recorded in the Event History.

1. Navigate to the Workflow Execution details page.
2. Click the **Reset** button in the top right dropdown menu.
3. Select the Event ID to reset to.
4. Provide a reason for the reset.
5. Confirm the reset.

The Web UI shows available reset points and creates a link to the new Workflow Execution after the reset completes.

Use the `temporal workflow reset` command to reset a Workflow Execution:

```bash
temporal workflow reset \
  --workflow-id <workflow-id> \
  --event-id <event-id> \
  --reason "Reason for reset"
```

For example:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code"
```

By default, the command resets the latest Workflow Execution in the `default` Namespace. Use `--run-id` to reset a specific run. Use `--namespace` to specify a different Namespace:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code" \
  --namespace my-namespace \
  --tls-cert-path /path/to/cert.pem \
  --tls-key-path /path/to/key.pem
```

Monitor the new Workflow Execution after resetting to ensure it completes successfully.

---

## Child Workflows - PHP SDK

## How to start a Child Workflow Execution {#child-workflows}

A [Child Workflow Execution](/child-workflows) is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.

When using a Child Workflow API, Child Workflow related Events ([StartChildWorkflowExecutionInitiated](/references/events#startchildworkflowexecutioninitiated), [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted), [ChildWorkflowExecutionCompleted](/references/events#childworkflowexecutioncompleted), etc...) are logged in the Workflow Execution Event History.

Always block progress until the [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted) Event is logged to the Event History to ensure the Child Workflow Execution has started. After that, Child Workflow Executions may be abandoned using the _Abandon_ [Parent Close Policy](/parent-close-policy) set in the Child Workflow Options.

To be sure that the Child Workflow Execution has started, first call the method that starts the Child Workflow Execution on the stub, which returns a promise. Then yield that promise, which waits until the Child Workflow Execution has spawned.

Besides Activities, a Workflow can also start other Workflows. `Workflow::executeChildWorkflow` and `Workflow::newChildWorkflowStub` enable the scheduling of other Workflows from within a Workflow's implementation. The parent Workflow has the ability to monitor and impact the lifecycle of the Child Workflow, similar to the way it does for an Activity that it invoked.

```php
// Use one stub per child workflow run
$child = Workflow::newChildWorkflowStub(
    ChildWorkflowInterface::class,
    ChildWorkflowOptions::new()
        // Do not specify WorkflowId if you want Temporal to generate a unique Id
        // for the child execution.
        ->withWorkflowId('BID-SIMPLE-CHILD-WORKFLOW')
        ->withExecutionStartToCloseTimeout(DateInterval::createFromDateString('30 minutes'))
);

// This is a non blocking call that returns immediately.
// Use yield $child->workflowMethod(name) to call synchronously.
$promise = $child->workflowMethod('value');

// Do something else here.
try {
    $value = yield $promise;
} catch (TemporalException $e) {
    $logger->error('child workflow failed');
    throw $e;
}
```

Let's take a look at each component of this call.

Before calling `$child->workflowMethod()`, you must configure `ChildWorkflowOptions` for the invocation. These options customize various execution timeouts, and are passed into the Workflow stub defined by `Workflow::newChildWorkflowStub`. Once the stub is created, you can invoke the Workflow method annotated with the `WorkflowMethod` attribute.

The method call returns immediately and returns a `Promise`. This allows you to execute more code without having to wait for the scheduled Workflow to complete. When you are ready to process the results of the Workflow, call `yield` on the returned promise object.

When a parent Workflow is cancelled by the user, the Child Workflow can be cancelled or abandoned based on a configurable child policy.

You can also skip the stub part of Child Workflow initiation and use `Workflow::executeChildWorkflow` directly:

```php
// Use one stub per child workflow run
$childResult = yield Workflow::executeChildWorkflow(
    'ChildWorkflowName',
    ['args'],
    ChildWorkflowOptions::new()->withWorkflowId('BID-SIMPLE-CHILD-WORKFLOW'),
    Type::TYPE_STRING // optional: defines the return type
);
```

#### How to set a Parent Close Policy {#parent-close-policy}

A [Parent Close Policy](/parent-close-policy) determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out). The default Parent Close Policy option is set to terminate the Child Workflow Execution.

In PHP, a [Parent Close Policy](/parent-close-policy) is set via the `ChildWorkflowOptions` object and the `withParentClosePolicy()` method. The possible values can be obtained from the [`ParentClosePolicy`](https://github.com/temporalio/sdk-php/blob/master/src/Workflow/ParentClosePolicy.php) class.

- `POLICY_TERMINATE`
- `POLICY_ABANDON`
- `POLICY_REQUEST_CANCEL`

The `ChildWorkflowOptions` object is then used to create a new Child Workflow object:

```php
$child = Workflow::newUntypedChildWorkflowStub(
    'child-workflow',
    ChildWorkflowOptions::new()
        ->withParentClosePolicy(ParentClosePolicy::POLICY_ABANDON)
);

yield $child->start();
```

In the snippet above we:

1. Create a new untyped Child Workflow stub with `Workflow::newUntypedChildWorkflowStub`.
2. Provide the `ChildWorkflowOptions` object with the Parent Close Policy set to `ParentClosePolicy::POLICY_ABANDON`.
3. Start the Child Workflow Execution asynchronously using `yield` and the `start()` method.

We need `yield` here to ensure that a Child Workflow Execution starts before the parent closes.

---

## Continue-As-New - PHP SDK

This page answers the following questions for PHP developers:

- [What is Continue-As-New?](#what)
- [How to Continue-As-New?](#how)
- [When is it right to Continue-as-New?](#when)
- [How to test Continue-as-New?](#how-to-test)

## What is Continue-As-New? {#what}

[Continue-As-New](/workflow-execution/continue-as-new) lets a Workflow Execution close successfully and creates a new Workflow Execution. You can think of it as a checkpoint when your Workflow gets too long or approaches certain scaling limits. The new Workflow Execution is in the same [chain](/workflow-execution#workflow-execution-chain); it keeps the same Workflow Id but gets a new Run Id and a fresh Event History. It also receives your Workflow's usual parameters.
## How to Continue-As-New using the PHP SDK {#how}

First, design your Workflow parameters so that you can pass in the "current state" when you Continue-As-New into the next Workflow run. This state is typically set to `null` for the original caller of the Workflow.

```php
final class ClusterManagerInput
{
    public function __construct(
        public ?ClusterManagerState $state = null,
        public bool $testContinueAsNew = false,
    ) {}
}

#[Workflow\WorkflowInterface]
interface MessageHandlerWorkflowInterface
{
    #[Workflow\WorkflowMethod]
    public function run(ClusterManagerInput $input);
}
```

The test hook in the above snippet is covered [below](#how-to-test).

Inside your Workflow, call the [`continueAsNew()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_continueAsNew) function with the same type. This stops the Workflow right away and starts a new one.

```php
Workflow::continueAsNew(
    Workflow::getInfo()->type->name,
    [new ClusterManagerInput($this->state, $input->testContinueAsNew)],
);
```

### Considerations for Workflows with Message Handlers {#with-message-handlers}

If you use Updates or Signals, don't call Continue-as-New from the handlers. Instead, wait for your handlers to finish in your main Workflow before you run `continueAsNew`. See the [`allHandlersFinished`](message-passing#wait-for-message-handlers) example for guidance.

## When is it right to Continue-as-New using the PHP SDK? {#when}

Use Continue-as-New when your Workflow might hit [Event History Limits](/workflow-execution/event#event-history). Temporal tracks your Workflow's progress against these limits to let you know when you should Continue-as-New. Check `Workflow::getInfo()->shouldContinueAsNew` to see if it's time.

## How to test Continue-as-New using the PHP SDK {#how-to-test}

Testing Workflows that naturally Continue-as-New may be time-consuming and resource-intensive. Instead, add a test hook to check your Workflow's Continue-as-New behavior faster in automated tests. For example, when `testContinueAsNew == true`, this sample creates a test-only variable called `$this->maxHistoryLength` and sets it to a small value. A helper method in the Workflow checks it each time it considers using Continue-as-New:

```php
private function shouldContinueAsNew(): bool
{
    if (Workflow::getInfo()->shouldContinueAsNew) {
        return true;
    }

    // This is just for ease-of-testing. In production, we trust temporal to tell us when to continue as new.
    if ($this->maxHistoryLength !== null && Workflow::getInfo()->historyLength > $this->maxHistoryLength) {
        return true;
    }

    return false;
}
```

---

## Core application - PHP SDK

## How to develop a basic Workflow {#develop-workflows}

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a [Workflow Definition](/workflow-definition).

In the Temporal PHP SDK programming model, Workflows are a class method. Classes must implement interfaces that are annotated with `#[WorkflowInterface]`. The method that is the Workflow must be annotated with `#[WorkflowMethod]`.
```php
use Temporal\Workflow\WorkflowInterface;
use Temporal\Workflow\WorkflowMethod;

#[WorkflowInterface]
interface FileProcessingWorkflow
{
    #[WorkflowMethod]
    public function processFile(Argument $args);
}
```

### How to define Workflow parameters {#workflow-parameters}

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow. All Workflow Definition parameters must be serializable.

A method annotated with `#[WorkflowMethod]` can have any number of parameters. We recommend passing a single parameter that contains all the input fields to allow for adding fields in a backward-compatible manner. Note that all inputs should be serializable to a byte array using the provided [DataConverter](https://github.com/temporalio/sdk-php/blob/master/src/DataConverter/DataConverterInterface.php) interface. The default implementation uses a JSON serializer, but an alternative implementation can be easily configured.

You can create a custom object and pass it to the Workflow method, as shown in the following example:

```php
#[WorkflowInterface]
interface FileProcessingWorkflow
{
    #[WorkflowMethod]
    public function processFile(Argument $args);
}
```

### How to define Workflow return parameters {#workflow-return-values}

Workflow return values must also be serializable. Returning results, returning errors, or throwing exceptions is fairly idiomatic in each language that is supported. However, Temporal APIs that must be used to get the result of a Workflow Execution will only ever receive one of either the result or the error.

A Workflow method returns a Generator. To properly typecast the Workflow's return value in the client code, use the `#[ReturnType()]` attribute.

```php
#[WorkflowInterface]
interface FileProcessingWorkflow
{
    #[WorkflowMethod]
    #[ReturnType("string")]
    public function processFile(Argument $args);
}
```

### How to customize your Workflow Type {#workflow-type}

Workflows have a Type, which is referred to as the Workflow name. The following examples demonstrate how to set a custom name for your Workflow Type.

To customize a Workflow Type, use the `WorkflowMethod` attribute to specify the name of the Workflow:

```php
#[WorkflowMethod(name: 'YourWorkflowName')]
```

If a Workflow Type is not specified, then the Workflow Type defaults to the interface name, which is `YourWorkflowDefinitionInterface` in this case.

```php
#[WorkflowInterface]
interface YourWorkflowDefinitionInterface
{
    #[WorkflowMethod]
    public function processFile(Argument $args);
}
```

### How to develop Workflow logic {#workflow-logic-requirements}

Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.

Temporal uses the [Microsoft Azure Event Sourcing pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing) to recover the state of a Workflow object including its local variable values. In essence, every time a Workflow state has to be restored, its code is re-executed from the beginning. When replaying, side effects (such as Activity invocations) are ignored because they are already recorded in the Workflow event history.
When writing Workflow logic, the replay is not visible, so the code should be written as if it executes only once. This design puts the following constraints on the Workflow implementation:

- Do not use any mutable global variables because multiple instances of Workflows are executed in parallel.
- Do not call any non-deterministic functions like non-seeded random or `UUID` directly from the Workflow code.

Always do the following in the Workflow implementation code:

- Don't perform any IO or service calls as they are not usually deterministic. Use Activities for this.
- Only use `Workflow::now()` to get the current time inside a Workflow.
- Call `yield Workflow::timer()` instead of `sleep()`.
- Do not use any blocking SPL provided by PHP (e.g. `fopen`, `PDO`, etc.) in **Workflow code**.
- Use `yield Workflow::getVersion()` when making any changes to the Workflow code. Without this, any deployment of updated Workflow code might break already open Workflows.
- Don't access configuration APIs directly from a Workflow because changes in the configuration might affect a Workflow Execution path. Pass it as an argument to a Workflow function or use an Activity to load it.

Workflow method arguments and return values are serializable to a byte array using the provided [DataConverter](https://github.com/temporalio/sdk-php/blob/master/src/DataConverter/DataConverterInterface.php) interface. The default implementation uses a JSON serializer, but you can use any alternative serialization mechanism. Make sure to annotate your `WorkflowMethod` using `ReturnType` to specify the concrete return type.

> You cannot use the default return type declaration, as Workflow methods are generators.

The values passed to Workflows through invocation parameters or returned through a result value are recorded in the execution history. The entire execution history is transferred from the Temporal service to Workflow workers with every event that the Workflow logic needs to process. A large execution history can thus adversely impact the performance of your Workflow. Therefore, be mindful of the amount of data that you transfer via Activity invocation parameters or return values. Otherwise, no additional limitations exist on Activity implementations.

## How to develop a basic Activity {#develop-activities}

One of the primary things that Workflows do is orchestrate the execution of Activities. An Activity is a normal function or method execution that's intended to execute a single, well-defined action (either short or long-running), such as querying a database, calling a third-party API, or transcoding a media file. An Activity can interact with the world outside the Temporal Platform or use a Temporal Client to interact with a Temporal Service. For the Workflow to be able to execute the Activity, we must define the [Activity Definition](/activity-definition).

Activities are defined as methods of a plain PHP interface annotated with `#[ActivityInterface]`. Following is an example of an interface that defines four Activities:

```php
#[ActivityInterface]
// Defining an interface for the activities.
interface FileProcessingActivities
{
    public function upload(string $bucketName, string $localName, string $targetName): void;

    #[ActivityMethod("transcode_file")]
    public function download(string $bucketName, string $remoteName): void;

    public function processFile(): string;

    public function deleteLocalFile(string $fileName): void;
}
```

### How to develop Activity Parameters {#activity-parameters}

There is no explicit limit to the total number of parameters that an [Activity Definition](/activity-definition) may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload.

A single argument is limited to a maximum size of 2 MB. And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Also, keep in mind that all Payload data is recorded in the [Workflow Execution Event History](/workflow-execution/event#event-history) and large Event Histories can affect Worker performance. This is because the entire Event History could be transferred to a Worker Process with a [Workflow Task](/tasks#workflow-task).

Some SDKs require that you pass context objects, others do not. When it comes to your application data—that is, data that is serialized and encoded into a Payload—we recommend that you use a single object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a function or method signature.

Each method defines a single Activity type. A single Workflow can use more than one Activity interface and call more than one Activity method from the same interface. The only requirement is that Activity method arguments and return values are serializable to a byte array using the provided [DataConverter](https://github.com/temporalio/sdk-php/blob/master/src/DataConverter/DataConverterInterface.php) interface. The default implementation uses a JSON serializer, but an alternative implementation can be easily configured.

### How to define Activity return values {#activity-return-values}

All data returned from an Activity must be serializable. Activity return values are subject to payload size limits in Temporal. The default payload size limit is 2MB, and there is a hard limit of 4MB for any gRPC message size in the Event History transaction ([see Cloud limits here](https://docs.temporal.io/cloud/limits#per-message-grpc-limit)). Keep in mind that all return values are recorded in a [Workflow Execution Event History](/workflow-execution/event#event-history).

Return values must be serializable to a byte array using the provided [DataConverter](https://github.com/temporalio/sdk-php/blob/master/src/DataConverter/DataConverterInterface.php) interface. The default implementation uses a JSON serializer, but an alternative implementation can be easily configured.

Thus, you can return both primitive types:

```php
class GreetingActivity implements GreetingActivityInterface
{
    public function composeGreeting(string $greeting, string $name): string
    {
        return $greeting . ' ' . $name;
    }
}
```

And objects:

```php
class GreetingActivity implements GreetingActivityInterface
{
    public function composeGreeting(string $greeting, string $name): Greeting
    {
        return new Greeting($greeting, $name);
    }
}
```

### How to customize your Activity Type {#activity-type}

Activities have a Type, which is referred to as the Activity name.
Each method defines a single Activity type. A single Workflow can use more than one Activity interface and call more than one Activity method from the same interface. The only requirement is that Activity method arguments and return values are serializable to a byte array using the provided [DataConverter](https://github.com/temporalio/sdk-php/blob/master/src/DataConverter/DataConverterInterface.php) interface. The default implementation uses a JSON serializer, but an alternative implementation can be easily configured.

### How to define Activity return values {#activity-return-values}

All data returned from an Activity must be serializable.

Activity return values are subject to payload size limits in Temporal. The default payload size limit is 2 MB, and there is a hard limit of 4 MB for any gRPC message size in the Event History transaction ([see Cloud limits here](https://docs.temporal.io/cloud/limits#per-message-grpc-limit)). Keep in mind that all return values are recorded in a [Workflow Execution Event History](/workflow-execution/event#event-history).

Return values must be serializable to a byte array using the provided [DataConverter](https://github.com/temporalio/sdk-php/blob/master/src/DataConverter/DataConverterInterface.php) interface. The default implementation uses a JSON serializer, but an alternative implementation can be easily configured.

Thus, you can return both primitive types:

```php
class GreetingActivity implements GreetingActivityInterface
{
    public function composeGreeting(string $greeting, string $name): string
    {
        return $greeting . ' ' . $name;
    }
}
```

And objects:

```php
class GreetingActivity implements GreetingActivityInterface
{
    public function composeGreeting(string $greeting, string $name): Greeting
    {
        return new Greeting($greeting, $name);
    }
}
```

### How to customize your Activity Type {#activity-type}

Activities have a Type, which is also referred to as the Activity name. The following examples demonstrate how to set a custom name for your Activity Type.

An optional `#[ActivityMethod]` attribute can be used to override the default Activity name. You can define your own prefix for all Activity names by adding the `prefix` option to the `ActivityInterface` attribute. (The default prefix is empty.)

```php
#[ActivityInterface("file_activities.")]
interface FileProcessingActivities
{
    public function upload(string $bucketName, string $localName, string $targetName);

    #[ActivityMethod("transcode_file")]
    public function download(string $bucketName, string $remoteName, string $localName);

    public function processFile(string $localName): string;

    public function deleteLocalFile(string $fileName);
}
```

The `#[ActivityInterface("file_activities.")]` attribute tells the PHP SDK to generate a class to implement the `FileProcessingActivities` interface. The functions define Activities that are used in the Workflow.

## How to start an Activity Execution {#activity-execution}

Calls to spawn [Activity Executions](/activity-execution) are written within a [Workflow Definition](/workflow-definition). The call to spawn an Activity Execution generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command. This results in the set of three [Activity Task](/tasks#activity-task) related Events ([ActivityTaskScheduled](/references/events#activitytaskscheduled), [ActivityTaskStarted](/references/events#activitytaskstarted), and ActivityTaskClosed) in your Workflow Execution Event History.

A single instance of the Activities implementation is shared across multiple simultaneous Activity invocations. Activity implementation code should be _idempotent_.

The values passed to Activities through invocation parameters or returned through a result value are recorded in the Execution history. The entire Execution history is transferred from the Temporal service to Workflow Workers when a Workflow state needs to recover. A large Execution history can thus adversely impact the performance of your Workflow. Therefore, be mindful of the amount of data you transfer through Activity invocation parameters or return values. Otherwise, no additional limitations exist on Activity implementations.

An Activity implementation is an implementation of an Activity interface. The following code example uses a constructor that takes an Amazon S3 client and a local directory, and uploads a file to an S3 bucket. Then, the code uses a function to download a file from the S3 bucket, passing a bucket name, remote name, and local name as arguments. Finally, it uses a function that takes a local file name as an argument and returns a string.

```php
// An implementation of an Activity interface.
class FileProcessingActivitiesImpl implements FileProcessingActivities
{
    private S3Client $s3Client;

    private string $localDirectory;

    public function __construct(S3Client $s3Client, string $localDirectory)
    {
        $this->s3Client = $s3Client;
        $this->localDirectory = $localDirectory;
    }

    // Uploading a file to S3.
    public function upload(string $bucketName, string $localName, string $targetName): void
    {
        $this->s3Client->putObject(
            $bucketName,
            $targetName,
            fopen($this->localDirectory . $localName, 'rb+')
        );
    }

    // Downloading a file from S3.
    public function download(
        string $bucketName,
        string $remoteName,
        string $localName
    ): void {
        $this->s3Client->downloadObject(
            $bucketName,
            $remoteName,
            fopen($this->localDirectory . $localName, 'wb+')
        );
    }

    // A function that takes a local file name as an argument and returns a string.
    public function processFile(string $localName): string
    {
        // Implementation omitted for brevity.
        return compressFile($this->localDirectory . $localName);
    }

    public function deleteLocalFile(string $fileName): void
    {
        unlink($this->localDirectory . $fileName);
    }
}
```

### How to set the required Activity Timeouts {#required-timeout}

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set in the Activity Options.

### How to get the results of an Activity Execution {#get-activity-results}

The call to spawn an [Activity Execution](/activity-execution) generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command and provides the Workflow with an Awaitable. Workflow Executions can either block progress until the result is available through the Awaitable or continue progressing, making use of the result when it becomes available.

`Workflow::newActivityStub` returns a client-side stub that implements an Activity interface. The client-side stub can be used within the Workflow code. It takes the Activity's type and `ActivityOptions` as arguments.

Calling (via `yield`) a method on this interface invokes an Activity that implements this method. An Activity invocation synchronously blocks until the Activity completes, fails, or times out. Even if the Activity Execution takes a few months, the Workflow code still sees it as a single synchronous invocation. It doesn't matter what happens to the processes that host the Workflow. The business logic code just sees a single method call.

```php
class GreetingWorkflow implements GreetingWorkflowInterface
{
    private $greetingActivity;

    public function __construct()
    {
        $this->greetingActivity = Workflow::newActivityStub(
            GreetingActivityInterface::class,
            ActivityOptions::new()->withStartToCloseTimeout(\DateInterval::createFromDateString('30 seconds'))
        );
    }

    public function greet(string $name): \Generator
    {
        // This is a blocking call that returns only after the Activity has completed.
        return yield $this->greetingActivity->composeGreeting('Hello', $name);
    }
}
```

If different Activities need different options, like timeouts or a Task Queue, multiple client-side stubs can be created with different options:

```php
$greetingActivity = Workflow::newActivityStub(
    GreetingActivityInterface::class,
    ActivityOptions::new()->withStartToCloseTimeout(\DateInterval::createFromDateString('30 seconds'))
);

$longGreetingActivity = Workflow::newActivityStub(
    GreetingActivityInterface::class,
    ActivityOptions::new()->withStartToCloseTimeout(\DateInterval::createFromDateString('30 minutes'))
);
```
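Because Activity stub calls return Promises, a Workflow can also start several Activities without blocking on each one and then wait for all of them together. A minimal sketch, assuming the `GreetingActivityInterface` stub from the preceding examples:

```php
use Temporal\Promise;

// Start two Activity invocations without awaiting them individually...
$hello = $this->greetingActivity->composeGreeting('Hello', $name);
$hi = $this->greetingActivity->composeGreeting('Hi', $name);

// ...then block until both results are available.
[$helloResult, $hiResult] = yield Promise::all([$hello, $hi]);
```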
## How to run Worker Processes {#run-a-dev-worker}

The [Worker Process](/workers#worker-process) is where Workflow Functions and Activity Functions are executed.

- Each [Worker Entity](/workers#worker-entity) in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
- Each Worker Entity must also associate itself with exactly one [Task Queue](/task-queue).
- Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A [Worker Entity](/workers#worker-entity) is the component within a Worker Process that listens to a specific Task Queue.

Although multiple Worker Entities can run in a single Worker Process, a Worker Process with a single Worker Entity may be perfectly sufficient. For more information, see the [Worker tuning guide](/develop/worker-performance).

A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

The [RoadRunner application server](https://roadrunner.dev/) will launch multiple Temporal PHP Worker processes based on the provided `.rr.yaml` configuration. Each Worker might connect to one or multiple Task Queues. Workers poll the _Temporal service_ for tasks, perform those tasks, and communicate task execution results back to the _Temporal service_.

Worker code is developed, deployed, and operated by Temporal customers. To create a Worker, use `Temporal\WorkerFactory`:

```php
<?php

declare(strict_types=1);

use Temporal\WorkerFactory;

include 'vendor/autoload.php';

// The factory initiates and runs Task-Queue-specific Activity and Workflow Workers.
$factory = WorkerFactory::create();

// A Worker that listens on a Task Queue and hosts both Workflow and Activity implementations.
$worker = $factory->newWorker();

// Workflows are stateful. So you need a type to create instances.
$worker->registerWorkflowTypes(App\DemoWorkflow::class);

// Activities are stateless and thread safe. So a shared instance is used.
$worker->registerActivity(App\DemoActivity::class);

// In case an activity class requires some external dependencies, provide a callback:
// a factory that creates or builds a new activity instance. The factory should be a
// callable which accepts an instance of ReflectionClass of the activity class to be created.
$worker->registerActivity(
    App\DemoActivity::class,
    fn(ReflectionClass $class) => $container->create($class->getName())
);

// Start the primary loop.
$factory->run();
```

You can configure the Task Queue name using the first argument of `WorkerFactory->newWorker`:

```php
$worker = $factory->newWorker('your-task-queue');
```

As mentioned previously, you can create as many Task Queue connections inside a single Worker Process as you need.

To configure additional `WorkerOptions`, use `Temporal\Worker\WorkerOptions`:

```php
use Temporal\Worker\WorkerOptions;

$worker = $factory->newWorker(
    'your-task-queue',
    WorkerOptions::new()
        ->withMaxConcurrentWorkflowTaskPollers(10)
);
```

Make sure to point to the Worker file in the application server configuration:

```yaml
rpc:
  listen: tcp://127.0.0.1:6001

server:
  command: 'php worker.php'

temporal:
  address: 'temporal:7233'
  activities:
    num_workers: 10
```

> You can serve HTTP endpoints using the same server setup.

To provide the [API key](/cloud/api-keys) to RoadRunner, use a `ServiceCredentials` DTO when creating the `WorkerFactory`:

```php
use Temporal\Worker\ServiceCredentials;

$workerFactory = \Temporal\WorkerFactory::create(
    credentials: ServiceCredentials::create()->withApiKey('your-api-key'),
);
```

[How to configure a connection to Temporal Cloud](/develop/php/temporal-client#connect-to-temporal-cloud)

### How to register types {#register-types}

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

A Worker listens on a Task Queue and hosts both Workflow and Activity implementations:

```php
// Workflows are stateful. So you need a type to create instances:
$worker->registerWorkflowTypes(App\DemoWorkflow::class);

// Activities are stateless and thread safe:
$worker->registerActivity(App\DemoActivity::class);
```

In case an activity class requires some external dependencies, provide a callback: a factory that creates or builds a new activity instance.
The factory should be a callable that accepts a `ReflectionClass` instance of the activity class to be created.

```php
$worker->registerActivity(
    App\DemoActivity::class,
    fn(ReflectionClass $class) => $container->create($class->getName())
);
```

If you want to clean up some resources after an activity is done, you may register a finalizer. This callback is called after each activity invocation:

```php
$worker->registerActivityFinalizer(fn() => $kernel->shutdown());
```

---

## Debugging - PHP SDK

## Debugging {#debug}

### How to debug in a development environment {#debug-in-a-development-environment}

In addition to the normal development tools of logging and a debugger, you can also see what's happening in your Workflow by using the [Web UI](/web-ui) or the [Temporal CLI](/cli).

### How to debug in a production environment {#debug-in-a-development-production}

You can debug production Workflows using:

- [Web UI](/web-ui)
- [Temporal CLI](/cli)

You can debug and tune Worker performance with metrics and the [Worker performance guide](/develop/worker-performance). Debug Server performance with [Cloud metrics](/cloud/metrics/) or [self-hosted Server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics).

---

## Enriching the User Interface - PHP SDK

Temporal supports adding context to Workflows and Events with metadata. This helps users identify and understand Workflows and their operations.

## Adding Summary and Details to Workflows

### Starting a Workflow

When starting a Workflow, you can provide a static summary and details to help identify the Workflow in the UI:

```php
use Temporal\Client\WorkflowClient;
use Temporal\Client\WorkflowOptions;

// Create a Workflow Client.
$workflowClient = WorkflowClient::create($serviceClient);

// Start a Workflow with a static summary and details.
$workflow = $workflowClient->newWorkflowStub(
    YourWorkflow::class,
    WorkflowOptions::new()
        ->withWorkflowId('your-workflow-id')
        ->withTaskQueue('your-task-queue')
        ->withStaticSummary('Order processing for customer #12345')
        ->withStaticDetails('Processing premium order with expedited shipping')
);

$result = $workflow->yourWorkflowMethod('workflow input');
```

`withStaticSummary()` sets a single-line description that appears in the Workflow list view, limited to 200 bytes. `withStaticDetails()` sets multi-line, comprehensive information that appears in the Workflow details view, with a larger limit of 20K bytes. The input format is standard Markdown, excluding images, HTML, and scripts.
You can also start a Workflow asynchronously:

```php
// Start the Workflow asynchronously.
$workflowClient->start($workflow, 'workflow input');
```

### Adding Summary to Activities and Timers

You can attach a `summary` to Timers within a Workflow:

```php
use Temporal\Workflow;
use Temporal\Workflow\TimerOptions;

#[WorkflowInterface]
interface YourWorkflow
{
    #[WorkflowMethod]
    public function yourWorkflowMethod(string $input): string;
}

class YourWorkflowImpl implements YourWorkflow
{
    public function yourWorkflowMethod(string $input): \Generator
    {
        // Create a Timer with a summary.
        yield Workflow::timer(
            300, // 5 minutes in seconds
            TimerOptions::new()->withSummary('Waiting for payment confirmation')
        );

        return 'Timer completed';
    }
}
```

For Activities, you can set a summary using the Activity options:

```php
use Temporal\Activity\ActivityOptions;
use Temporal\Workflow;

class YourWorkflowImpl implements YourWorkflow
{
    private YourActivitiesInterface $activities;

    public function __construct()
    {
        $this->activities = Workflow::newActivityStub(
            YourActivitiesInterface::class,
            ActivityOptions::new()
                ->withStartToCloseTimeout('10 seconds')
                ->withSummary('Processing user data')
        );
    }

    public function yourWorkflowMethod(string $input): \Generator
    {
        // Execute the Activity with the summary.
        $result = yield $this->activities->yourActivity($input);

        return $result;
    }
}
```

The input format for `summary` is a string limited to 200 bytes.

## Viewing Summary and Details in the UI

Once you've added summaries and details to your Workflows, Activities, and Timers, you can view this enriched information in the Temporal Web UI. Navigate to your Workflow's details page to see the metadata displayed in two key locations:

### Workflow Overview Section

At the top of the Workflow details page, you'll find the Workflow-level metadata:

- **Summary & Details** - Displays the static summary and static details set when starting the Workflow
- **Current Details** - Displays the dynamic details that can be updated during the Workflow Execution

All Workflow details support standard Markdown formatting (excluding images, HTML, and scripts), allowing you to create rich, structured information displays.

### Event History

Individual events in the Workflow's Event History display their associated summaries when available. Workflow, Activity, and Timer summaries appear in purple text next to their corresponding Events, providing immediate context without requiring you to expand the event details. When you do expand an Event, the summary is also prominently displayed in the detailed view.

---

## Failure detection - PHP SDK

## Workflow timeouts {#workflow-timeouts}

Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution.

Before we continue, we want to note that we generally do not recommend setting Workflow Timeouts, because Workflows are designed to be long-running and resilient; setting a Timeout can limit a Workflow's ability to handle unexpected delays or long-running processes. If you need to perform an action inside your Workflow after a specific period of time, we recommend using a Timer.

Workflow timeouts are set when [starting the Workflow Execution](#workflow-timeouts).

- **[Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout):** restricts the maximum amount of time that a single Workflow Execution can be executed.
- **[Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout):** restricts the maximum amount of time that a single Workflow Run can last.
- **[Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout):** restricts the maximum amount of time that a Worker can execute a Workflow Task.

Create an instance of `WorkflowOptions` in the Client code and set your timeout. Available timeouts are:

- `withWorkflowExecutionTimeout()`
- `withWorkflowRunTimeout()`
- `withWorkflowTaskTimeout()`

```php
$workflow = $this->workflowClient->newWorkflowStub(
    DynamicSleepWorkflowInterface::class,
    WorkflowOptions::new()
        ->withWorkflowId(DynamicSleepWorkflow::WORKFLOW_ID)
        ->withWorkflowIdReusePolicy(WorkflowIdReusePolicy::WORKFLOW_ID_REUSE_POLICY_ALLOW_DUPLICATE)
        // Set the Workflow Timeout duration.
        ->withWorkflowExecutionTimeout(CarbonInterval::minutes(2))
        // ->withWorkflowRunTimeout(CarbonInterval::minutes(2))
        // ->withWorkflowTaskTimeout(CarbonInterval::minutes(2))
);
```

### Workflow retries {#workflow-retries}

A Retry Policy can work in cooperation with the timeouts to provide fine controls to optimize the execution experience.

Use a [Retry Policy](/encyclopedia/retry-policies) to retry a Workflow Execution in the event of a failure. Workflow Executions do not retry by default, and Retry Policies should be used with Workflow Executions only in certain situations.

A Retry Policy can be configured with an instance of the `RetryOptions` object. To enable retries for a Workflow, you need to provide a Retry Policy object via `ChildWorkflowOptions` for Child Workflows or via `WorkflowOptions` for top-level Workflows.

```php
$workflow = $this->workflowClient->newWorkflowStub(
    CronWorkflowInterface::class,
    WorkflowOptions::new()->withRetryOptions(
        RetryOptions::new()->withInitialInterval(120)
    )
);
```

## How to set Activity timeouts {#activity-timeouts}

Each Activity timeout controls the maximum duration of a different aspect of an Activity Execution.

The following timeouts are available in the Activity Options.

- **[Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout):** is the maximum amount of time allowed for the overall [Activity Execution](/activity-execution).
- **[Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout):** is the maximum time allowed for a single [Activity Task Execution](/tasks#activity-task-execution).
- **[Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout):** is the maximum amount of time that is allowed from when an [Activity Task](/tasks#activity-task) is scheduled to when a [Worker](/workers#worker) starts that Activity Task.

An Activity Execution must have either the Start-To-Close or the Schedule-To-Close Timeout set. Because Activities are reentrant, only a single stub can be used for multiple Activity invocations.
Available timeouts are:

- `withScheduleToCloseTimeout()`
- `withStartToCloseTimeout()`
- `withScheduleToStartTimeout()`

```php
$this->greetingActivity = Workflow::newActivityStub(
    GreetingActivityInterface::class,
    // Set the Activity Timeout duration.
    ActivityOptions::new()
        ->withScheduleToCloseTimeout(CarbonInterval::seconds(2))
        // ->withStartToCloseTimeout(CarbonInterval::seconds(2))
        // ->withScheduleToStartTimeout(CarbonInterval::seconds(10))
);
```

### How to set an Activity Retry Policy {#activity-retries}

A Retry Policy works in cooperation with the timeouts to provide fine controls to optimize the execution experience.

Activity Executions are automatically associated with a default [Retry Policy](/encyclopedia/retry-policies) if a custom one is not provided.

To set an Activity Retry Policy, set `RetryOptions` on `ActivityOptions`. The following example creates a new Activity stub with the given options:

```php
$this->greetingActivity = Workflow::newActivityStub(
    GreetingActivityInterface::class,
    ActivityOptions::new()
        ->withScheduleToCloseTimeout(CarbonInterval::seconds(10))
        ->withRetryOptions(
            RetryOptions::new()
                ->withInitialInterval(CarbonInterval::seconds(1))
                ->withMaximumAttempts(5)
                ->withNonRetryableExceptions([\InvalidArgumentException::class])
        )
);
```

For an executable code sample, see the [ActivityRetry sample](https://github.com/temporalio/samples-php/tree/master/app/src/ActivityRetry) in the PHP samples repository.

### How to set the required Activity Timeouts {#required-timeout}

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set in the Activity Options.

## Activity next Retry delay {#activity-next-retry-delay}

**How to override the next Retry delay following an Activity failure using the Temporal PHP SDK**

You may throw an [`ApplicationFailure`](/references/failures#application-failure) with the `nextRetryDelay` field set. This value will replace and override whatever the retry interval would be on the Retry Policy. For example, if in an Activity you want to base the interval on the number of attempts, you might do:

```php
$attempt = \Temporal\Activity::getInfo()->attempt;

throw new \Temporal\Exception\Failure\ApplicationFailure(
    message: "Something bad happened on attempt $attempt",
    type: 'my_failure_type',
    nonRetryable: false,
    nextRetryDelay: \DateInterval::createFromDateString(\sprintf('%d seconds', $attempt * 3)),
);
```

## How to Heartbeat an Activity {#activity-heartbeats}

An [Activity Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) is a ping from the [Worker Process](/workers#worker-process) that is executing the Activity to the [Temporal Service](/temporal-service). Each Heartbeat informs the Temporal Service that the [Activity Execution](/activity-execution) is making progress and the Worker has not crashed. If the Temporal Service does not receive a Heartbeat within a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) time period, the Activity will be considered failed and another [Activity Task Execution](/tasks#activity-task-execution) may be scheduled according to the Retry Policy.

Heartbeats may not always be sent to the Temporal Service; they may be [throttled](/encyclopedia/detecting-activity-failures#throttling) by the Worker.
Activity Cancellations are delivered to Activities from the Temporal Service when they Heartbeat. Activities that don't Heartbeat can't receive a Cancellation. Heartbeat throttling may lead to a Cancellation being delivered later than expected.

Heartbeats can contain a `details` field describing the Activity's current progress. If an Activity gets retried, the Activity can access the `details` from the last Heartbeat that was sent to the Temporal Service.

Some Activities are long-running. To react to a crash quickly, use the Heartbeat mechanism, `Activity::heartbeat()`, which lets the Temporal Server know that the Activity is still alive. This acts as a periodic checkpoint mechanism for the progress of an Activity.

You can piggyback `details` on an Activity Heartbeat. If an Activity times out, the last value of `details` is included in the `TimeoutFailure` delivered to a Workflow. Then the Workflow can pass the details to the next Activity invocation. Additionally, you can access the details from within an Activity via `Activity::getHeartbeatDetails`. When an Activity is retried after a failure, `getHeartbeatDetails` enables you to get the value from the last successful Heartbeat.

```php
use Temporal\Activity;

class FileProcessingActivitiesImpl implements FileProcessingActivities
{
    // ...

    public function download(
        string $bucketName,
        string $remoteName,
        string $localName
    ): void {
        $this->downloader->downloadWithProgress(
            $bucketName,
            $remoteName,
            $localName,
            // on progress
            function ($progress) {
                Activity::heartbeat($progress);
            }
        );

        Activity::heartbeat(100); // download complete

        // ...
    }

    // ...
}
```

#### How to set a Heartbeat Timeout {#heartbeat-timeout}

A [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) works in conjunction with [Activity Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat).
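The timeout itself is configured on the Activity Options. A minimal sketch, reusing the `FileProcessingActivities` interface from the earlier examples:

```php
use Carbon\CarbonInterval;
use Temporal\Activity\ActivityOptions;
use Temporal\Workflow;

$fileActivities = Workflow::newActivityStub(
    FileProcessingActivities::class,
    ActivityOptions::new()
        ->withStartToCloseTimeout(CarbonInterval::hours(2))
        // If no Heartbeat arrives within 30 seconds, the Activity is considered
        // failed and may be retried according to the Retry Policy.
        ->withHeartbeatTimeout(CarbonInterval::seconds(30)),
);
```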
---

## PHP SDK developer guide

![PHP SDK Banner](/img/assets/banner-php-temporal.png)

:::info PHP SPECIFIC RESOURCES

Build Temporal Applications with the PHP SDK.

**Temporal PHP Technical Resources:**

- [PHP SDK Quickstart - Setup Guide](/develop/php/set-up-your-local-php)
- [PHP API Documentation](https://php.temporal.io)
- [PHP SDK Code Samples](https://github.com/temporalio/samples-php)
- [PHP SDK GitHub](https://github.com/temporalio/sdk-php)

**Get Connected with the Temporal PHP Community:**

- [Temporal PHP Community Slack](https://temporalio.slack.com/archives/C01LK9FAMM0)
- [PHP SDK Forum](https://community.temporal.io/tag/php-sdk)

:::

## [Core Application](/develop/php/core-application)

Use the essential components of a Temporal Application (Workflows, Activities, and Workers) to build and run a Temporal application.

- [How to develop a basic Workflow](/develop/php/core-application#develop-workflows)
- [How to develop a basic Activity](/develop/php/core-application#develop-activities)
- [How to start an Activity Execution](/develop/php/core-application#activity-execution)
- [How to run Worker Processes](/develop/php/core-application#run-a-dev-worker)

## [Temporal Client](/develop/php/temporal-client)

Connect to a Temporal Service and start a Workflow Execution.

- [How to connect a Temporal Client to a Temporal Service](/develop/php/temporal-client#connect-to-a-dev-cluster)
- [How to connect a Temporal Client to Temporal Cloud](/develop/php/temporal-client#connect-to-temporal-cloud)
- [How to start a Workflow Execution](/develop/php/temporal-client#start-workflow-execution)
- [Advanced connection options](/develop/php/temporal-client#advanced-connection-options)

## [Testing](/develop/php/testing-suite)

Set up the testing suite to test Workflows and Activities.

- [Testing Activities](/develop/php/testing-suite#test-activities)
- [Testing Workflows](/develop/php/testing-suite#test-workflows)
- [How to Replay a Workflow Execution](/develop/php/testing-suite#replay)

## [Failure detection](/develop/php/failure-detection)

Explore how your application can detect failures using timeouts and automatically attempt to mitigate them with retries.

- [Workflow timeouts](/develop/php/failure-detection#workflow-timeouts)
- [How to set Activity timeouts](/develop/php/failure-detection#activity-timeouts)
- [How to Heartbeat an Activity](/develop/php/failure-detection#activity-heartbeats)

## [Workflow message passing](/develop/php/message-passing)

Send messages to and read the state of Workflow Executions.

- [How to develop with Signals](/develop/php/message-passing#signals)
- [How to develop with Queries](/develop/php/message-passing#queries)
- [How to develop with Updates](/develop/php/message-passing#updates)
- [Message handler patterns](/develop/php/message-passing#message-handler-patterns)
- [Message handler troubleshooting](/develop/php/message-passing#message-handler-troubleshooting)
- [How to develop with Dynamic Handlers](/develop/php/message-passing#dynamic-handler)

## [Interrupt a Workflow feature guide](/develop/php/cancellation)

Interrupt a Workflow Execution with a Cancel or Terminate action.

- [Cancel an Activity from a Workflow](/develop/php/cancellation#cancel-an-activity)
- [Reset a Workflow](/develop/php/cancellation#reset): Resume a Workflow Execution from an earlier point in its Event History.

## [Versioning](/develop/php/versioning)

The PHP SDK [Versioning developer guide](/develop/php/versioning) shows how to change Workflow Definitions without causing non-deterministic behavior in running Workflows.

- [How to use the PHP SDK Patching API](/develop/php/versioning#php-sdk-patching-api): Patching Workflows using the PHP SDK.
- [Sanity checking](/develop/php/versioning#runtime-checking)

## [Asynchronous Activity Completion](/develop/php/asynchronous-activity-completion)

Complete Activities asynchronously.

- [How to asynchronously complete an Activity](/develop/php/asynchronous-activity-completion#asynchronous-activity-completion)

## [Observability](/develop/php/observability)

Configure and use the Temporal Observability APIs.

- [How to log from a Workflow](/develop/php/observability#logging)
- [How to use Visibility APIs](/develop/php/observability#visibility)

## [Debugging](/develop/php/debugging)

Explore various ways to debug your application.

- [Debugging](/develop/php/debugging#debug)

## [Schedules](/develop/php/schedules)

Run Workflows on a schedule and delay the start of a Workflow.

- [How to use Start Delay](/develop/php/schedules#start-delay)
- [How to use Temporal Cron Jobs](/develop/php/schedules#temporal-cron-jobs)

## [Durable Timers](/develop/php/timers)

Use Timers to make a Workflow Execution pause or "sleep" for seconds, minutes, days, months, or years.

- [What is a Timer?](/develop/php/timers#timers)

## [Child Workflows](/develop/php/child-workflows)

Explore how to spawn a Child Workflow Execution and handle Child Workflow Events.

- [How to start a Child Workflow Execution](/develop/php/child-workflows#child-workflows)

## [Continue-As-New](/develop/php/continue-as-new)

Continue the Workflow Execution with a new Workflow Execution using the same Workflow ID.

- [How to Continue-As-New](/develop/php/continue-as-new)

## [Side Effects](/develop/php/side-effects)

Use Side Effects in Workflows.

- [How to use Side Effects in PHP](/develop/php/side-effects#side-effects)

## [Enriching the User Interface](/develop/php/enriching-ui)

Add descriptive information to Workflows and Events for better visibility and context in the UI.

- [Adding Summary and Details to Workflows](/develop/php/enriching-ui#adding-summary-and-details-to-workflows)

---

## Workflow message passing - PHP SDK

## How to develop with Signals {#signals}

A [Signal](/sending-messages#sending-signals) is a message sent to a running Workflow Execution. Signals are defined in your code and handled in your Workflow Definition. Signals can be sent to Workflow Executions from a Temporal Client or from another Workflow Execution.

### How to define a Signal {#define-signal}

A Signal has a name and can have arguments.

- The name, also called a Signal type, is a string.
- The arguments must be [serializable](/dataconversion).

Workflows can answer synchronous [Queries](/sending-messages#sending-queries) and receive [Signals](/sending-messages#sending-signals).

All interface methods must have one of the following attributes:

- **`#[WorkflowMethod]`** indicates an entry point to a Workflow. It contains parameters that specify timeouts and a Task Queue name. Required parameters (such as `executionStartToCloseTimeoutSeconds`) that are not specified through the attribute must be provided at runtime.
- **`#[SignalMethod]`** indicates a method that reacts to external Signals. It must have a `void` return type.
- **`#[QueryMethod]`** indicates a method that reacts to synchronous Query requests. It must have a non-`void` return type.

> It is possible (though not recommended for usability reasons) to annotate a concrete class implementation.

You can have more than one method with the same attribute (except `#[WorkflowMethod]`).
For example:

```php
use Temporal\Workflow\WorkflowInterface;
use Temporal\Workflow\WorkflowMethod;
use Temporal\Workflow\SignalMethod;
use Temporal\Workflow\QueryMethod;

#[WorkflowInterface]
interface FileProcessingWorkflow
{
    #[WorkflowMethod]
    public function processFile(Argument $args);

    #[QueryMethod("history")]
    public function getHistory(): array;

    #[QueryMethod("status")]
    public function getStatus(): string;

    #[SignalMethod]
    public function retryNow(): void;

    #[SignalMethod]
    public function abandon(): void;
}
```

Note that the name parameter of the Workflow method attributes can be used to specify the name of the Workflow, Signal, and Query types. If a name is not specified, the short name of the Workflow interface is used. In the preceding code, `#[WorkflowMethod(name)]` is not specified, so the Workflow Type defaults to `"FileProcessingWorkflow"`.

### How to handle a Signal {#handle-signal}

Workflows listen for Signals by the Signal's name.

Use the `#[SignalMethod]` attribute to handle Signals in the Workflow interface:

```php
use Temporal\Workflow;

#[Workflow\WorkflowInterface]
class YourWorkflow
{
    private bool $value = false;

    #[Workflow\WorkflowMethod]
    public function run()
    {
        yield Workflow::await(fn() => $this->value);
        return 'OK';
    }

    #[Workflow\SignalMethod]
    public function setValue(bool $value)
    {
        $this->value = $value;
    }
}
```

In the preceding example, the Workflow updates the private `$value` property when the Signal arrives. The main Workflow coroutine waits for the value to change by using the `Workflow::await()` function.

### How to send a Signal from a Temporal Client {#send-signal-from-client}

When a Signal is sent successfully from the Temporal Client, the [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the Event History of the Workflow that receives the Signal.

To send a Signal to a Workflow Execution from a Client, call the Signal method, annotated with `#[SignalMethod]` in the Workflow interface, from the Client code.

To send a Signal to a Workflow, use `WorkflowClient->newWorkflowStub` or `WorkflowClient->newUntypedWorkflowStub`:

```php
$workflow = $workflowClient->newWorkflowStub(YourWorkflow::class);

$run = $workflowClient->start($workflow);

// do something

$workflow->setValue(true);

assert($run->getResult() === 'OK');
```

Use `WorkflowClient->newRunningWorkflowStub` or `WorkflowClient->newUntypedRunningWorkflowStub` with a Workflow Id to send Signals to already running Workflows:

```php
$workflow = $workflowClient->newRunningWorkflowStub(YourWorkflow::class, 'workflowID');
$workflow->setValue(true);
```

See [Handle Signal](#handle-signal) for details on how to handle Signals in a Workflow.

### How to send a Signal from a Workflow {#send-signal-from-workflow}

A Workflow can send a Signal to another Workflow, in which case it's called an _External Signal_.

When an External Signal is sent:

- A [SignalExternalWorkflowExecutionInitiated](/references/events#signalexternalworkflowexecutioninitiated) Event appears in the sender's Event History.
- A [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the recipient's Event History.
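Inside Workflow code, an external Workflow stub can be used for this. The following is a minimal sketch, assuming the `YourWorkflow` interface from the preceding examples and a known Workflow Id; check the exact stub API against the [`Workflow` class reference](https://php.temporal.io/classes/Temporal-Workflow.html):

```php
use Temporal\Workflow;
use Temporal\Workflow\WorkflowExecution;

// Inside a Workflow method: obtain a stub for an already-running external Workflow...
$external = Workflow::newExternalWorkflowStub(
    YourWorkflow::class,
    new WorkflowExecution('external-workflow-id'),
);

// ...and deliver the Signal to it.
yield $external->setValue(true);
```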
To signal the same Workflow from Client code instead, use the `WorkflowClient->newWorkflowStub`, `WorkflowClient->newUntypedWorkflowStub`, `WorkflowClient->newRunningWorkflowStub`, or `WorkflowClient->newUntypedRunningWorkflowStub` examples shown in [How to send a Signal from a Temporal Client](#send-signal-from-client).

### How to Signal-With-Start {#signal-with-start}

Signal-With-Start is used from the Client. It takes a Workflow Id, Workflow arguments, a Signal name, and Signal arguments.

If there's a Workflow running with the given Workflow Id, it will be signaled. If there isn't, a new Workflow will be started and immediately signaled.

In cases where you may not know whether a Workflow is running and want to send a Signal to it, use `startWithSignal`. If a running Workflow exists, the `startWithSignal` API sends the Signal. If there is no running Workflow, the API starts a new Workflow Run and delivers the Signal to it.

```php
$workflow = $workflowClient->newWorkflowStub(YourWorkflow::class);

$run = $workflowClient->startWithSignal(
    $workflow,
    'setValue',
    [true], // signal arguments
    []      // start arguments
);
```

## How to develop with Queries {#queries}

A [Query](/sending-messages#sending-queries) is a synchronous operation that is used to get the state of a Workflow Execution.

### How to define a Query {#define-query}

A Query has a name and can have arguments.

- The name, also called a Query type, is a string.
- The arguments must be [serializable](/dataconversion).

Workflows can answer synchronous [Queries](/sending-messages#sending-queries) and receive [Signals](/sending-messages#sending-signals).

All interface methods must have one of the following attributes:

- **`#[WorkflowMethod]`** indicates an entry point to a Workflow. It contains parameters that specify timeouts and a Task Queue name. Required parameters (such as `executionStartToCloseTimeoutSeconds`) that are not specified through the attribute must be provided at runtime.
- **`#[SignalMethod]`** indicates a method that reacts to external Signals. It must have a `void` return type.
- **`#[QueryMethod]`** indicates a method that reacts to synchronous Query requests. It must have a non-`void` return type.

> It is possible (though not recommended for usability reasons) to annotate a concrete class implementation.

You can have more than one method with the same attribute (except `#[WorkflowMethod]`).

For example:

```php
use Temporal\Workflow\WorkflowInterface;
use Temporal\Workflow\WorkflowMethod;
use Temporal\Workflow\SignalMethod;
use Temporal\Workflow\QueryMethod;

#[WorkflowInterface]
interface FileProcessingWorkflow
{
    #[WorkflowMethod]
    public function processFile(Argument $args);

    #[QueryMethod("history")]
    public function getHistory(): array;

    #[QueryMethod("status")]
    public function getStatus(): string;

    #[SignalMethod]
    public function retryNow(): void;

    #[SignalMethod]
    public function abandon(): void;
}
```

Note that the name parameter of the Workflow method attributes can be used to specify the name of the Workflow, Signal, and Query types. If a name is not specified, the short name of the Workflow interface is used. In the preceding code, `#[WorkflowMethod(name)]` is not specified, so the Workflow Type defaults to `"FileProcessingWorkflow"`.
### How to handle a Query {#handle-query}

Queries are handled by your Workflow.

Don't include any logic that causes [Command](/workflow-execution#command) generation within a Query handler (such as executing Activities). Including such logic causes unexpected behavior.

You can add custom Query types to handle Queries such as Querying the current state of a Workflow, or Querying how many Activities the Workflow has completed. To do this, you need to set up a Query handler using the `QueryMethod` method attribute or `Workflow::registerQuery`.

```php
#[Workflow\WorkflowInterface]
class YourWorkflow
{
    #[Workflow\QueryMethod]
    public function getValue()
    {
        return 42;
    }

    #[Workflow\WorkflowMethod]
    public function run()
    {
        // workflow code
    }
}
```

The handler function can receive any number of input parameters, but all input parameters must be serializable. The following sample code sets up a Query handler that handles the Query type of `currentState`:

```php
#[Workflow\WorkflowInterface]
class YourWorkflow
{
    private string $currentState;

    #[Workflow\QueryMethod('current_state')]
    public function getCurrentState(): string
    {
        return $this->currentState;
    }

    #[Workflow\WorkflowMethod]
    public function run()
    {
        // Your normal Workflow code begins here, and you update the currentState
        // as the code makes progress.
        $this->currentState = 'waiting timer';

        try {
            yield Workflow::timer(DateInterval::createFromDateString('1 hour'));
        } catch (\Throwable $e) {
            $this->currentState = 'timer failed';
            throw $e;
        }

        $yourActivity = Workflow::newActivityStub(
            YourActivityInterface::class,
            ActivityOptions::new()->withScheduleToStartTimeout(60)
        );

        $this->currentState = 'waiting activity';

        try {
            yield $yourActivity->doSomething('some input');
        } catch (\Throwable $e) {
            $this->currentState = 'activity failed';
            throw $e;
        }

        $this->currentState = 'done';

        return null;
    }
}
```

You can also issue a Query from your Client code. Use a `WorkflowStub` to Query Workflow instances; Queries can be applied to both running and closed Workflows:

```php
$workflow = $workflowClient->newWorkflowStub(
    YourWorkflow::class,
    WorkflowOptions::new()
);

$workflowClient->start($workflow);

var_dump($workflow->getCurrentState());
sleep(60);
var_dump($workflow->getCurrentState());
```

### How to send a Query {#send-query}

Queries are sent from a Temporal Client.
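For example, a minimal sketch that Queries a running Workflow by Id with an untyped stub (the Query name `current_state` matches the handler defined above; the Workflow Id is a hypothetical placeholder):

```php
// Attach to an existing Workflow Execution by its Workflow Id.
$untyped = $workflowClient->newUntypedRunningWorkflowStub('your-workflow-id');

// Send the Query and unpack the first returned value.
$state = $untyped->query('current_state')?->getValue(0);
```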
## How to develop with Updates {#updates}

An [Update](/sending-messages#sending-updates) is an operation that can mutate the state of a Workflow Execution and return a response.

### Define Update {#define-update}

**How to define an Update using the PHP SDK.**

Workflow Update handlers are methods in your Workflow Definition designed to handle updates. These updates can be triggered during the lifecycle of a Workflow Execution.

An Update handler has a name, arguments, response, and an optional validator.

- The name, also called an Update type, is a string.
- The arguments and response must be [serializable](/dataconversion).

The [`#[UpdateMethod]`](https://php.temporal.io/classes/Temporal-Workflow-UpdateMethod.html) attribute indicates that the method is used to handle and respond to update requests.

```php
#[UpdateMethod]
public function myUpdate(string $value);
```

### Handle Update {#handle-updates}

**How to handle Updates in a Workflow using the PHP SDK.**

Workflows listen for Updates by the Update's name.

Use the `#[UpdateMethod]` attribute to handle Updates in the Workflow interface. The handler method can accept multiple serializable input parameters, but it's recommended to use only a single parameter. The function can return a [serializable](/dataconversion) value or `void`.

```php
#[WorkflowInterface]
interface FileProcessingWorkflow
{
    #[WorkflowMethod]
    #[ReturnType(ProcessResult::class)]
    public function processFile(File $file);

    #[UpdateMethod]
    public function pauseProcessing(): void;
}
```

Update handlers, unlike Query handlers, can change Workflow state.

The Update type defaults to the name of the method. To overwrite this default naming and assign a custom Update type, use the `#[UpdateMethod]` attribute with the `name` parameter.

```php
#[WorkflowInterface]
interface FileProcessingWorkflow
{
    #[WorkflowMethod]
    public function processFiles(FileList $files);

    #[UpdateMethod(name: 'pause')]
    public function pauseProcessing(): void;
}
```

**Register Update Handler dynamically**

You can register Update handlers dynamically using the `Workflow::registerUpdate()` method. The third argument is an optional Update validator. The validator function must have the same parameters as the handler and throw an exception if the validation fails.

```php
Workflow::registerUpdate(
    name: 'pause',
    handler: fn() => $this->paused = true,
    validator: fn() => $this->paused === false or throw new \Exception('Workflow is already paused'),
);
```

### Validate Update {#validate-an-update}

**How to validate Updates in a Workflow using the PHP SDK.**

Validate certain aspects of the data sent to the Workflow using an Update Validator method. For instance, a counter Workflow might never want to accept a non-positive number. Use the [`#[UpdateValidatorMethod]`](https://php.temporal.io/classes/Temporal-Workflow-UpdateValidatorMethod.html) attribute and set the `forUpdate` argument to the name of your Update handler. Your Update Validator should accept the same input parameters as your Update Handler and return `void`.

```php
#[WorkflowInterface]
interface GreetingWorkflow
{
    #[WorkflowMethod]
    public function getGreetings(): array;

    #[UpdateMethod]
    public function addGreeting(string $name): int;

    #[UpdateValidatorMethod(forUpdate: 'addGreeting')]
    public function addGreetingValidator(string $name): void;
}
```
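For illustration, here is a minimal, self-contained sketch of such a counter Workflow (a hypothetical example with attributes placed directly on the class); because the validator throws, a rejected Update is never recorded in the Event History:

```php
use Temporal\Workflow;

#[Workflow\WorkflowInterface]
class CounterWorkflow
{
    private int $count = 0;

    #[Workflow\WorkflowMethod]
    public function run()
    {
        // Complete once the counter reaches 10.
        yield Workflow::await(fn() => $this->count >= 10);
        return $this->count;
    }

    #[Workflow\UpdateMethod]
    public function add(int $value): int
    {
        return $this->count += $value;
    }

    #[Workflow\UpdateValidatorMethod(forUpdate: 'add')]
    public function validateAdd(int $value): void
    {
        if ($value <= 0) {
            // Throwing rejects the Update before the handler runs.
            throw new \InvalidArgumentException('Only positive increments are accepted.');
        }
    }
}
```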
### Send Update from a Client {#send-update-from-client}

**How to send an Update to a Workflow Execution from a Temporal Client using the PHP SDK.**

To send an Update to a Workflow Execution from a Client, call the Update method, annotated with `#[UpdateMethod]` in the Workflow interface, from the Client code.

In the following Client code example, start the Workflow `getGreetings` and call the Update method `addGreeting` that is handled in the Workflow:

```php
// Create a typed Workflow stub for GreetingWorkflow.
$workflow = $workflowClient->newWorkflowStub(GreetingWorkflow::class, $workflowOptions);

// Start the Workflow.
$run = $workflowClient->start($workflow);

// Send an Update to the Workflow. addGreeting returns
// the number of greetings our Workflow has received.
$count = $workflow->addGreeting("World");
```

**Async accept**

In Workflow Update methods, all Workflow features are available, such as executing Activities and Child Workflows, and waiting on timers/conditions. In cases where it's known that the Update will take a long time to execute, or you are not interested in the outcome of its execution, you can use the stub method [`startUpdate`](https://php.temporal.io/classes/Temporal-Client-WorkflowStubInterface.html#method_startUpdate) and move on immediately after receiving the validation result. Note that the processing Workflow Worker must be available; otherwise, the request may block indefinitely or fail due to a timeout.

```php
use Ramsey\Uuid\UuidInterface;
use Temporal\Client\Update\UpdateOptions;
use Temporal\Client\Update\WaitPolicy;
use Temporal\Client\Update\LifecycleStage;

// Create an untyped Workflow stub for GreetingWorkflow.
$stub = $client->newUntypedWorkflowStub('GreetingWorkflow', $workflowOptions);

// Start the Workflow.
$run = $client->start($stub);

// Send an Update to the Workflow; startUpdate returns an UpdateHandle.
$handle = $stub->startUpdate('addGreeting', 'World');

// Use the UpdateHandle to get the Update result with a timeout of 2.5 seconds.
$result = $handle->getResult(timeout: 2.5);

// You can get more control using UpdateOptions.
$resultUuid = $stub->startUpdate(
    UpdateOptions::new('storeGreetings', LifecycleStage::StageCompleted)
        ->withResultType(UuidInterface::class)
)->getResult();
```

#### Update-With-Start {#update-with-start}

[Update-with-Start](/sending-messages#update-with-start) lets you [send an Update](#send-update-from-client) that checks whether an already-running Workflow with that ID exists:

- If the Workflow exists, the Update is processed.
- If the Workflow does not exist, a new Workflow Execution is started with the given ID, and the Update is processed before the main Workflow method starts to execute.

You can:

- Use the [`updateWithStart`](https://php.temporal.io/classes/Temporal-Client-WorkflowClientInterface.html#method_updateWithStart) WorkflowClient API. It returns once the requested Update wait stage has been reached, or when the request times out.
- Use the [`UpdateHandle`](https://php.temporal.io/classes/Temporal-Client-Update-UpdateHandle.html) to retrieve a result from the Update.

You provide:

- A WorkflowStub created from [`WorkflowOptions`](https://php.temporal.io/classes/Temporal-Client-WorkflowOptions.html).
  - The `WorkflowOptions` require a [Workflow Id Conflict Policy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) to be specified. Choose ["Use Existing"](https://php.temporal.io/classes/Temporal-Common-WorkflowIdConflictPolicy.html#enumcase_UseExisting) and use an idempotent Update handler to ensure your code can be executed again in case of a Client failure.
  - Not all `WorkflowOptions` are allowed. For example, specifying a Cron Schedule will result in an error.
- An Update name or [`UpdateOptions`](https://php.temporal.io/classes/Temporal-Client-Update-UpdateOptions.html). This mirrors the approach used for [Update Workflow](#send-update-from-client).
  - For Update-with-Start, the Workflow Id is optional. When specified, the Id must match the one used in `WorkflowOptions`.
  - Since a running Workflow Execution may not already exist, you can't set a Run Id.
For example:

```php
$stub = $workflowClient->newUntypedWorkflowStub(
    ShoppingCartWorkflow::class,
    WorkflowOptions::new()
        ->withTaskQueue('service-queue')
        ->withWorkflowId($cartId)
        ->withWorkflowIdConflictPolicy(WorkflowIdConflictPolicy::UseExisting),
);

$handle = $workflowClient->updateWithStart(
    workflow: $stub,
    update: 'addItem',
    updateArgs: [$itemId, $quantity],
);

$price = $handle->getResult();
```

To wait for the Update result, run the Update with the wait stage set to [`LifecycleStage::StageCompleted`](https://php.temporal.io/classes/Temporal-Client-Update-LifecycleStage.html#enumcase_StageCompleted). This returns once the Update result is available, or when the API call times out. For example:

```php
$handle = $workflowClient->updateWithStart(
    workflow: $stub,
    update: UpdateOptions::new('addItem', LifecycleStage::StageCompleted),
    updateArgs: [$itemId, $quantity],
);

assert($handle->hasResult() === true);

$price = $handle->getResult();
```

## Message handler patterns {#message-handler-patterns}

This section covers common write operations, such as Signal and Update handlers. It doesn't apply to pure read operations, like Queries or Update Validators.

:::tip

For additional information, see [Inject work into the main Workflow](/handling-messages#injecting-work-into-main-workflow), [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing), and [this sample](https://github.com/temporalio/samples-php/tree/master/app/src/SafeMessageHandlers) demonstrating safe `async` message handling.

:::

### Add wait conditions to block

Sometimes, async Signal or Update handlers need to meet certain conditions before they should continue. You can use a wait condition ([`Workflow::await()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_await)) to set a function that prevents the code from proceeding until the condition returns `true`. This is an important feature that helps you control your handler logic.

Here are two important use cases for `Workflow::await()`:

- Waiting in a handler until it is appropriate to continue.
- Waiting in the main Workflow until all active handlers have finished.

The condition state you're waiting for can be updated by, and reflect, any part of the Workflow code. This includes the main Workflow method, other handlers, or child coroutines spawned by the main Workflow method (see [`Workflow::async()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_async)).

### Use wait conditions in handlers

It's common to use a Workflow wait condition to wait until a handler should start. You can also use wait conditions anywhere else in the handler to wait for a specific condition to become `true`. This allows you to write handlers that pause at multiple points, each time waiting for a required condition to become `true`.

Consider a `readyForUpdateToExecute` method that runs before your Update handler executes. The `Workflow::await` method waits until your condition is met:

```php
#[UpdateMethod]
public function myUpdate(UpdateInput $input)
{
    yield Workflow::await(
        fn() => $this->readyForUpdateToExecute($input),
    );
    // ...
}
```

Remember: Handlers can execute before the main Workflow method starts.

### Ensure your handlers finish before the Workflow completes {#wait-for-message-handlers}

Workflow wait conditions can ensure your handler completes before a Workflow finishes.
When your Workflow uses async Signal or Update handlers, your main Workflow method can return or Continue-as-New while a handler is still waiting on an async task, such as an Activity result. The Workflow completing may interrupt the handler before it finishes crucial work and cause client errors when trying to retrieve Update results. Use [`Workflow::await()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_await) and [`Workflow::allHandlersFinished()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_allHandlersFinished) to address this problem and allow your Workflow to end smoothly:

```php
#[WorkflowInterface]
class MyWorkflow
{
    #[WorkflowMethod]
    public function run()
    {
        // ...
        yield Workflow::await(fn() => Workflow::allHandlersFinished());

        return 'workflow-result';
    }
}
```

By default, your Worker will log a warning when you allow a Workflow Execution to finish with unfinished handler executions. You can silence these warnings on a per-handler basis by passing the `unfinishedPolicy` argument to the [`UpdateMethod`](https://php.temporal.io/classes/Temporal-Workflow-UpdateMethod.html) / [`SignalMethod`](https://php.temporal.io/classes/Temporal-Workflow-SignalMethod.html) attribute:

```php
#[UpdateMethod(unfinishedPolicy: HandlerUnfinishedPolicy::Abandon)]
public function myUpdate()
{
    // ...
}
```

See [Finishing handlers before the Workflow completes](/handling-messages#finishing-message-handlers) for more information.

### Use `#[WorkflowInit]` to operate on Workflow input before any handler executes

Normally, your Workflow's constructor won't have any parameters. However, if you use the `#[WorkflowInit]` attribute on your constructor, you can give it the same [Workflow parameters](/develop/php/core-application#workflow-parameters) as your `#[WorkflowMethod]`. The SDK will then ensure that your constructor receives the Workflow input arguments that the [Client sent](/develop/php/temporal-client#start-workflow-execution). The Workflow input arguments are also passed to your `#[WorkflowMethod]` method; that always happens, whether or not you use the `#[WorkflowInit]` attribute. This is useful if you have message handlers that need access to the Workflow input: see [Initializing the Workflow first](/sending-messages).

Here's an example. Notice that the constructor and `getGreeting` must have the same parameters:

```php
use Temporal\Workflow;

#[Workflow\WorkflowInterface]
class GreetingExample
{
    private readonly string $nameWithTitle;

    private bool $titleHasBeenChecked;

    // Note the attribute is on a public constructor.
    #[Workflow\WorkflowInit]
    public function __construct(string $input)
    {
        $this->nameWithTitle = 'Sir ' . $input;
        $this->titleHasBeenChecked = false;
    }

    #[Workflow\WorkflowMethod]
    public function getGreeting(string $input)
    {
        yield Workflow::await(fn() => $this->titleHasBeenChecked);
        return "Hello " . $this->nameWithTitle;
    }

    #[Workflow\UpdateMethod]
    public function checkTitleValidity()
    {
        // 👉 The handler is now guaranteed to see the Workflow input
        // after it has been processed by the constructor.
        $isValid = yield Workflow::executeActivity('activity.checkTitleValidity', [$this->nameWithTitle]);
        $this->titleHasBeenChecked = true;

        return $isValid;
    }
}
```

:::note

By default, the Workflow handler runs before Signals and Updates in PHP SDK v2. This behavior is incorrect. To avoid breaking already-written Workflows, PHP SDK v2.11.0 added a [feature flag](https://php.temporal.io/classes/Temporal-Worker-FeatureFlags.html#property_workflowDeferredHandlerStart) that enables the corrected behavior of the Workflow handler. Make sure to set this flag to `true`.

:::
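For example, the flag can be set early in the Worker entry point, before the `WorkerFactory` is created; a minimal sketch, assuming the static property named in the feature-flag reference above:

```php
use Temporal\Worker\FeatureFlags;
use Temporal\WorkerFactory;

// Opt in to the corrected ordering: message handlers no longer run
// before the Workflow handler is ready (PHP SDK >= 2.11.0).
FeatureFlags::$workflowDeferredHandlerStart = true;

$factory = WorkerFactory::create();
```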
### Use `Mutex` to prevent concurrent handler execution {#control-handler-concurrency}

Concurrent processes can interact in unpredictable ways. Incorrectly written [concurrent message-passing](/handling-messages#message-handler-concurrency) code may not work correctly when multiple handler instances run simultaneously. Here's an example of a pathological case:

```php
use Temporal\Workflow;

#[Workflow\WorkflowInterface]
class MyWorkflow
{
    // ...

    #[Workflow\SignalMethod]
    public function badAsyncHandler()
    {
        $data = yield Workflow::executeActivity(
            type: 'fetch_data',
            args: ['url' => 'http://example.com'],
            options: ActivityOptions::new()->withStartToCloseTimeout('10 seconds'),
        );
        $this->x = $data->x;

        // 🐛🐛 Bug!! If multiple instances of this handler are executing concurrently,
        // then there may be times when the Workflow has $this->x from one Activity
        // execution and $this->y from another.
        yield Workflow::timer(1); // or await anything else

        $this->y = $data->y;
    }
}
```

Coordinating access using `Mutex` corrects this code. Locking makes sure that only one handler instance can execute a specific section of code at any given time:

```php
use Temporal\Workflow;

#[Workflow\WorkflowInterface]
class MyWorkflow
{
    // ...

    private Workflow\Mutex $mutex;

    public function __construct()
    {
        $this->mutex = new Workflow\Mutex();
    }

    #[Workflow\SignalMethod]
    public function safeAsyncHandler()
    {
        $data = yield Workflow::executeActivity(
            type: 'fetch_data',
            args: ['url' => 'http://example.com'],
            options: ActivityOptions::new()->withStartToCloseTimeout('10 seconds'),
        );

        yield Workflow::runLocked($this->mutex, function () use ($data) {
            $this->x = $data->x;

            // ✅ OK: the scheduler may switch now to a different handler execution,
            // or to the main Workflow method, but no other execution of this handler
            // can run until this execution finishes.
            yield Workflow::timer(1); // or await anything else

            $this->y = $data->y;
        });
    }
}
```

## Message handler troubleshooting {#message-handler-troubleshooting}

When sending a Signal, Update, or Query to a Workflow, your Client might encounter the following errors:

- **The Client can't contact the server**: You'll receive a [`ServiceClientException`](https://php.temporal.io/classes/Temporal-Exception-Client-ServiceClientException.html) in case of a server connection error. [How to configure an RPC Retry Policy](/develop/php/temporal-client#configure-rpc-retry-policy)
- **RPC timeout**: You'll receive a [`TimeoutException`](https://php.temporal.io/classes/Temporal-Exception-Client-TimeoutException.html) in case of an RPC timeout. [How to configure an RPC timeout](/develop/php/temporal-client#configure-rpc-timeout)
- **The Workflow does not exist**: You'll receive a [`WorkflowNotFoundException`](https://php.temporal.io/classes/Temporal-Exception-Client-WorkflowNotFoundException.html) exception.

See [Exceptions in message handlers](/handling-messages#exceptions) for a non-PHP-specific discussion of this topic.

### Problems when sending a Signal {#signal-problems}

When using Signals, the only exception that will result from your requests during execution is `ServiceClientException`.
All handlers may experience additional exceptions during the initial (pre-Worker) part of a handler request lifecycle. For Queries and Updates, the Client waits for a response from the Worker. If an issue occurs during the handler execution by the Worker, the Client may receive an exception.

### Problems when sending an Update {#update-problems}

When working with Updates, you may encounter these errors:

- **No Workflow Workers are polling the Task Queue**: Your request will be retried by the SDK Client indefinitely. You can [configure RPC timeout](/develop/php/temporal-client#configure-rpc-timeout) to impose a timeout. If the timeout is reached, a [`WorkflowUpdateRPCTimeoutOrCanceledException`](https://php.temporal.io/classes/Temporal-Exception-Client-WorkflowUpdateRPCTimeoutOrCanceledException.html) is raised.
- **Update failed**: You'll receive a [`WorkflowUpdateException`](https://php.temporal.io/classes/Temporal-Exception-Client-WorkflowUpdateException.html) exception. There are two ways this can happen:
  - The Update was rejected by an Update validator defined in the Workflow alongside the Update handler.
  - The Update failed after having been accepted.

  Update failures are like [Workflow failures](/references/failures#errors-in-workflows). Issues that cause a Workflow failure in the main method also cause Update failures in the Update handler. These might include:

  - A failed Child Workflow
  - A failed Activity (if the Activity retries have been set to a finite number)
  - The Workflow author raising `ApplicationFailure`
- **The handler caused the Workflow Task to fail**: A [Workflow Task Failure](/references/failures#errors-in-workflows) causes the server to retry Workflow Tasks indefinitely. What happens to your Update request depends on its stage:
  - If the request hasn't been accepted by the server, you receive a [`WorkflowUpdateException`](https://php.temporal.io/classes/Temporal-Exception-Client-WorkflowUpdateException.html).
  - If the request has been accepted, it is durable. Once the Workflow is healthy again after a code deploy, use an [`UpdateHandle`](https://php.temporal.io/classes/Temporal-Client-Update-UpdateHandle.html) to fetch the Update result.
- **The Workflow finished while the Update handler execution was in progress**: You'll receive a [`WorkflowUpdateException`](https://php.temporal.io/classes/Temporal-Exception-Client-WorkflowUpdateException.html). This can happen, for example, because:
  - The Workflow was canceled or failed.
  - The Workflow completed normally or continued-as-new and the Workflow author did not [wait for handlers to be finished](/handling-messages#finishing-message-handlers).

### Problems when sending a Query {#query-problems}

When working with Queries, you may encounter these errors:

- **There is no Workflow Worker polling the Task Queue**: You'll receive a [`WorkflowNotFoundException`](https://php.temporal.io/classes/Temporal-Exception-Client-WorkflowNotFoundException.html).
- **Query failed**: You'll receive a [`WorkflowQueryException`](https://php.temporal.io/classes/Temporal-Exception-Client-WorkflowQueryException.html) if something goes wrong during a Query. Any exception in a Query handler will trigger this error. This differs from Signal and Update requests, where exceptions can lead to Workflow Task Failure instead.
- **The handler caused the Workflow Task to fail**: This would happen, for example, if the Query handler blocks the thread for too long without yielding.
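To make these Query failure modes concrete, here is a minimal sketch of handling them on the Client side; the `getState` Query name and the Workflow Id are hypothetical, and a `$workflowClient` is assumed to be in scope:

```php
use Temporal\Exception\Client\WorkflowNotFoundException;
use Temporal\Exception\Client\WorkflowQueryException;

$stub = $workflowClient->newUntypedRunningWorkflowStub('my-workflow-id');

try {
    // Send the Query and decode the first returned value.
    $state = $stub->query('getState')?->getValue(0);
} catch (WorkflowNotFoundException $e) {
    // The Workflow Execution doesn't exist.
} catch (WorkflowQueryException $e) {
    // The Query handler threw an exception inside the Workflow.
}
```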
## Dynamic components {#dynamic-handler}

Temporal supports Dynamic Queries, Signals, and Updates. These are unnamed handlers that are invoked if no other statically defined handler with the given name exists.

Dynamic Handlers provide flexibility to handle cases where the names of Queries, Signals, or Updates aren't known at runtime.

:::caution
Dynamic Handlers should be used judiciously as a fallback mechanism rather than the primary approach. Overusing them can lead to maintainability and debugging issues down the line.

Instead, Signals, Updates, and Queries should be defined statically whenever possible, with clear names that indicate their purpose. Use static definitions as the primary way of structuring your Workflows.

Reserve Dynamic Handlers for cases where the handler names are not known at development time and need to be looked up dynamically at runtime. They are meant to handle edge cases and act as a catch-all, not as the main way of invoking logic.
:::

### How to set a Dynamic Query {#set-a-dynamic-query}

A Dynamic Query in Temporal is a Query method that is invoked dynamically at runtime if no other Query with the same name is registered.

Use [`Workflow::registerDynamicQuery()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_registerDynamicQuery) to set a dynamic Query handler. The Query Handler parameters must accept a `string` name and [`ValuesInterface`](https://php.temporal.io/classes/Temporal-DataConverter-ValuesInterface.html) for the arguments.

```php
Workflow::registerDynamicQuery(function (string $name, ValuesInterface $arguments): string {
    return \sprintf(
        'Got query `%s` with %d arguments',
        $name,
        $arguments->count(),
    );
});
```

### How to set a Dynamic Signal {#set-a-dynamic-signal}

A Dynamic Signal in Temporal is a Signal that is invoked dynamically at runtime if no other Signal with the same name is registered.

Use [`Workflow::registerDynamicSignal()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_registerDynamicSignal) to set a dynamic Signal handler. The Signal Handler parameters must accept a `string` name and [`ValuesInterface`](https://php.temporal.io/classes/Temporal-DataConverter-ValuesInterface.html) for the arguments.

```php
Workflow::registerDynamicSignal(function (string $name, ValuesInterface $arguments): void {
    Workflow::getLogger()->info(\sprintf(
        'Executed signal `%s` with %d arguments',
        $name,
        $arguments->count(),
    ));
});
```

### How to set a Dynamic Update {#set-a-dynamic-update}

A Dynamic Update in Temporal is an Update that is invoked dynamically at runtime if no other Update with the same name is registered.

Use [`Workflow::registerDynamicUpdate()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_registerDynamicUpdate) to set a dynamic Update handler. The method accepts two arguments:

- Update Handler
- Update Validator (optional) that should throw an exception if the validation fails

Both the Handler and the Validator must accept a `string` name and [`ValuesInterface`](https://php.temporal.io/classes/Temporal-DataConverter-ValuesInterface.html) for the arguments.
```php
Workflow::registerDynamicUpdate(
    static fn(string $name, ValuesInterface $arguments): string => \sprintf(
        'Got update `%s` with %d arguments',
        $name,
        $arguments->count(),
    ),
    static fn(string $name, ValuesInterface $arguments) => \str_starts_with(
        $name,
        'update_',
    ) or throw new \InvalidArgumentException('Invalid update name'),
);
```

---

## Observability - PHP SDK

The observability section of the Temporal Developer's guide covers the many ways to view the current state of your [Temporal Application](/temporal#temporal-application)—that is, ways to view which [Workflow Executions](/workflow-execution) are tracked by the [Temporal Platform](/temporal#temporal-platform) and the state of any specified Workflow Execution, either currently or at points of an execution.

This section covers features related to viewing the state of the application, including:

- [Log from a Workflow](#logging)
- [Visibility](#visibility)

## Log from a Workflow {#logging}

Logging enables you to record critical information during code execution. Loggers create an audit trail and capture information about your Workflow's operation. An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.

The logger supports the following logging levels:

| Level   | Use                                                                                                       |
| ------- | --------------------------------------------------------------------------------------------------------- |
| `TRACE` | The most detailed level of logging, used for very fine-grained information.                                |
| `DEBUG` | Detailed information, typically useful for debugging purposes.                                             |
| `INFO`  | General information about the application's operation.                                                     |
| `WARN`  | Indicates potentially harmful situations or minor issues that don't prevent the application from working.  |
| `ERROR` | Indicates error conditions that might still allow the application to continue running.                     |

The Temporal SDK core normally uses `WARN` as its default logging level.

To get a PSR-3 compatible logger in your Workflow code, use the [`Workflow::getLogger()`](https://php.temporal.io/classes/Temporal-Workflow.html#method_getLogger) method.

```php
use Temporal\Workflow;

#[Workflow\WorkflowInterface]
class MyWorkflow
{
    #[Workflow\WorkflowMethod]
    public function execute(string $param): \Generator
    {
        Workflow::getLogger()->info('Workflow started', ['parameter' => $param]);

        // Your workflow implementation

        Workflow::getLogger()->info('Workflow completed');
        return 'Done';
    }
}
```

The Workflow logger automatically enriches log context with the current Task Queue name.

Logs in replay mode are omitted unless the [`enableLoggingInReplay`](https://php.temporal.io/classes/Temporal-Worker-WorkerOptions.html#method_withEnableLoggingInReplay) Worker option is set to `true`.

```php
$factory = WorkerFactory::create();
$worker = $factory->newWorker('your-task-queue', WorkerOptions::new()
    ->withEnableLoggingInReplay(true)
);
```

### Default Logger

By default, the PHP SDK uses a [`StderrLogger`](https://php.temporal.io/classes/Temporal-Worker-Logger-StderrLogger.html) that outputs log messages to the standard error stream. These messages are automatically captured by RoadRunner and incorporated into its logging system with the INFO level, ensuring proper log collection in both development and production environments.
For more details on RoadRunner's logging capabilities, see the [RoadRunner Logger documentation](https://docs.roadrunner.dev/docs/logging-and-observability/logger).

### How to provide a custom logger {#custom-logger}

You can set a custom PSR-3 compatible logger when creating a Worker:

```php
$myLogger = new MyLogger();

$workerFactory = WorkerFactory::create();
$worker = $workerFactory->newWorker(
    taskQueue: 'my-task-queue',
    logger: $myLogger,
);
```

## Visibility APIs {#visibility}

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

### How to use Search Attributes {#search-attributes}

The typical method of retrieving a Workflow Execution is by its Workflow Id. However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments. You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime` and `ExecutionStatus` are automatically added to Workflow Executions.
- _Custom Search Attributes_ can contain their own domain-specific data (like `customerId` or `numItems`).
- A few [generic Custom Search Attributes](/search-attribute#custom-search-attribute) like `CustomKeywordField` and `CustomIntField` are created by default in Temporal's [Docker Compose](https://github.com/temporalio/docker-compose).

The steps to using custom Search Attributes are:

- Create a new Search Attribute in your Temporal Service using `temporal operator search-attribute create` or the Cloud UI.
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client by including it as an option when starting the Execution.
  - In the Workflow by calling `UpsertSearchAttributes`.
- Read the value of the Search Attribute:
  - On the Client by calling `DescribeWorkflow`.
  - In the Workflow by looking at `WorkflowInfo`.
- Query Workflow Executions by the Search Attribute using a [List Filter](/list-filter):
  - [In the Temporal CLI](/cli/workflow#list).
  - In code by calling `ListWorkflowExecutions`.

Here is how to query Workflow Executions:

Use the [listWorkflowExecutions()](https://php.temporal.io/classes/Temporal-Client-WorkflowClientInterface.html#method_listWorkflowExecutions) method on the Client and pass a [List Filter](/list-filter) as an argument to filter the listed Workflows. The result is an iterable paginator, so you can use a `foreach` loop to iterate over the results.

```php
$paginator = $workflowClient->listWorkflowExecutions('WorkflowType="GreetingWorkflow"');

foreach ($paginator as $info) {
    echo "Workflow ID: {$info->execution->getID()}\n";
}
```

### How to set custom Search Attributes {#custom-search-attributes}

After you've created custom Search Attributes in your Temporal Service (using `temporal operator search-attribute create` or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

To set custom Search Attributes, use the `withTypedSearchAttributes` method on `WorkflowOptions` for a Workflow stub. Typed Search Attributes are a `TypedSearchAttributes` collection.
```php
$keyDestinationTime = SearchAttributeKey::forDatetime('DestinationTime');
$keyOrderId = SearchAttributeKey::forKeyword('OrderId');

$workflow = $workflowClient->newWorkflowStub(
    OrderWorkflowInterface::class,
    WorkflowOptions::new()
        ->withWorkflowExecutionTimeout('10 minutes')
        ->withTypedSearchAttributes(
            TypedSearchAttributes::empty()
                ->withValue($keyOrderId, $orderId)
                ->withValue($keyDestinationTime, new \DateTimeImmutable('2028-11-05T00:10:07Z'))
        ),
);
```

### How to upsert Search Attributes {#upsert-search-attributes}

Within the Workflow code, you can dynamically add or update Search Attributes using [`upsertTypedSearchAttributes`](https://php.temporal.io/classes/Temporal-Workflow.html#method_upsertTypedSearchAttributes). This method is particularly useful for Workflows whose attributes need to change based on internal logic or external events.

```php
#[Workflow\UpdateMethod]
public function postponeDestinationTime(\DateInterval $interval)
{
    // Get the key for the DestinationTime attribute
    $keyDestinationTime = SearchAttributeKey::forDatetime('DestinationTime');

    /** @var DateTimeImmutable $destinationTime */
    $destinationTime = Workflow::getInfo()->typedSearchAttributes->get($keyDestinationTime);

    Workflow::upsertTypedSearchAttributes(
        $keyDestinationTime->valueSet($destinationTime->add($interval)),
    );
}
```

### How to remove a Search Attribute from a Workflow {#remove-search-attribute}

To remove a Search Attribute that was previously set, unset its value by calling `valueUnset()` on the attribute key.

```php
#[Workflow\UpdateMethod]
public function unsetDestinationTime()
{
    // Get the key for the DestinationTime attribute
    $keyDestinationTime = SearchAttributeKey::forDatetime('DestinationTime');

    Workflow::upsertTypedSearchAttributes(
        $keyDestinationTime->valueUnset(),
    );
}
```

---

## Schedules - PHP SDK

This page shows how to do the following:

- [How to use Start Delay](#start-delay)
- [How to use Temporal Cron Jobs](#temporal-cron-jobs)

## How to use Start Delay {#start-delay}

Use the Workflow [Start Delay](/workflow-execution/timers-delays) functionality if you need to delay the start of a Workflow Execution without setting up a recurring schedule. You simply specify the time to wait before dispatching the first Workflow Task.

```php
$workflow = $workflowClient->newWorkflowStub(
    GreeterWorkflowInterface::class,
    WorkflowOptions::new()
        ->withWorkflowStartDelay(CarbonInterval::minutes(10)),
);
$workflowClient->start($workflow, 'Hello world!');
```

## How to use Temporal Cron Jobs {#temporal-cron-jobs}

:::caution Cron support is not recommended

We recommend using [Schedules](https://docs.temporal.io/schedule) instead of Cron Jobs. Schedules were built to provide a better developer experience, including more configuration options and the ability to update or pause running Schedules.

:::

A [Temporal Cron Job](/cron-job) is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution.

A Cron Schedule is provided as an option when the call to spawn a Workflow Execution is made. Set your Cron Schedule with `withCronSchedule('* * * * *')`.
Temporal Workflow Schedule Cron strings follow this format:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```

The following example sets a Cron Schedule in PHP:

```php
$workflow = $this->workflowClient->newWorkflowStub(
    CronWorkflowInterface::class,
    WorkflowOptions::new()
        ->withWorkflowId(CronWorkflowInterface::WORKFLOW_ID)
        ->withCronSchedule('* * * * *')
        // Execution timeout limits total time. Cron will stop executing after this timeout.
        ->withWorkflowExecutionTimeout(CarbonInterval::minutes(10))
        // Run timeout limits duration of a single workflow invocation.
        ->withWorkflowRunTimeout(CarbonInterval::minute(1))
);

$output->writeln("Starting CronWorkflow... ");

try {
    $run = $this->workflowClient->start($workflow, 'Antony');
    // ...
}
```

Setting `withCronSchedule` turns the Workflow Execution into a Temporal Cron Job. For more information, see the [PHP samples](https://github.com/temporalio/samples-php/tree/master/app/src/Cron) for example code or the PHP SDK `WorkflowOptions` [source code](https://github.com/temporalio/sdk-php/blob/master/src/Client/WorkflowOptions.php).

:::tip Schedule Auto-Deletion

Once a Schedule has completed creating all its Workflow Executions, the Temporal Service deletes it since it won't fire again. The Temporal Service doesn't guarantee when this removal will happen.

:::

---

## Set up your local development with the PHP SDK

---

# Quickstart

Configure your local development environment to get started developing with Temporal.

## Install PHP

Make sure you have PHP installed:

```bash
php -v
```

**If you don't have PHP:** Visit the official website to [download and install](https://www.php.net/downloads.php) it.

### gRPC extension

The gRPC extension is required to work with the RoadRunner application server.

**If you don't have `ext-grpc` installed:** Visit the official website to [download and install](https://docs.cloud.google.com/php/docs/reference/help/grpc) it.

:::tip GRPC Installation Tip

On macOS with Apple Silicon (M1/M2/M3/M4) and PHP 8.3, `pecl install grpc` may appear to hang or install indefinitely. If this happens, try installing a specific version:

```bash
pecl install channel://pecl.php.net/grpc-1.78.0RC2
```

Note: You can find the latest versions at [pecl.php.net/package/grpc](https://pecl.php.net/package/grpc).

:::

## Create a Project

Now that you have PHP installed, create a project to manage your dependencies and build your Temporal application:

```bash
mkdir temporal-hello-world
cd temporal-hello-world
composer init --name="myproject/quickstart" -n
```

## Add Temporal PHP SDK and Configure Autoloading

Install the Temporal SDK, then add PSR-4 autoloading to your `composer.json` so PHP can find your Workflow and Activity classes:

```bash
composer require temporal/sdk
```

Your final `composer.json` should look like this:

```json
{
    "name": "myproject/quickstart",
    "require": {
        "temporal/sdk": "^2.16"
    },
    "autoload": {
        "psr-4": {
            "App\\": "src/"
        }
    }
}
```

After updating, run `composer dump-autoload` to regenerate the autoloader:

```bash
composer dump-autoload
```

## Install RoadRunner application server

Install the [RoadRunner application server](https://github.com/roadrunner-server/roadrunner). It starts and manages your PHP processes that run Temporal Workers, and connects them to the Temporal Service over gRPC. See the [RoadRunner installation instructions](https://docs.roadrunner.dev/docs/general/install) to learn about other installation methods.

Download RoadRunner with the following command:

```bash
./vendor/bin/rr get
```

When prompted "Do you want create default '.rr.yaml' configuration file?", answer yes. You'll replace it with the proper config in the next step.
### DLoad package manager

Consider using DLoad to delegate all installation and updating processes to the package manager.

Install the DLoad package manager using Composer:

```bash
composer require --dev internal/dload
```

Create a configuration file named `dload.xml` containing a download action for the RoadRunner binary; see the DLoad documentation for the configuration format.

Finally, download the RoadRunner binary:

```bash
./vendor/bin/dload
```

### Configure RoadRunner

Create a simple configuration file named `.rr.yaml` with the following content:

```yaml
version: "3"

rpc:
  listen: tcp://127.0.0.1:6001

server:
  command: "php worker.php"

temporal:
  address: "127.0.0.1:7233"

logs:
  level: info
```

## Install Temporal CLI and start the development server

The fastest way to get a development version of the Temporal Service running on your local machine is to use [Temporal CLI](https://docs.temporal.io/cli).

Choose your operating system to install Temporal CLI:

- **macOS**: Install the Temporal CLI using Homebrew:

  ```bash
  brew install temporal
  ```

- **Windows**: Download the Temporal CLI archive for your architecture (Windows amd64 or Windows arm64), extract it, and add `temporal.exe` to your PATH.

- **Linux**: Download the Temporal CLI for your architecture (Linux amd64 or Linux arm64), extract the archive, and move the `temporal` binary into your PATH, for example:

  ```bash
  sudo mv temporal /usr/local/bin
  ```

- **DLoad package manager**: If you're using DLoad, add one more download action for the Temporal CLI to your `dload.xml` configuration file, then download it with `./vendor/bin/dload`.

## Start the development server

Once you've installed Temporal CLI and added it to your PATH, open a new Terminal window and run the following command. Keep it running in the background:

```bash
temporal server start-dev
```

This command starts a local Temporal Service. It starts the Web UI, creates the default Namespace, and uses an in-memory database. The Temporal Service will be available on `localhost:7233`, and the Temporal Web UI will be available at http://localhost:8233.

Leave the local Temporal Service running as you work through tutorials and other projects. You can stop the Temporal Service at any time by pressing CTRL+C.

**Change the Web UI port**

The Temporal Web UI may be on a different port in some examples or tutorials. To change the port for the Web UI, use the `--ui-port` option when starting the server:

```bash
temporal server start-dev --ui-port 8080
```

The Temporal Web UI will now be available at http://localhost:8080.

Once you have everything installed, you're ready to build apps with Temporal on your local machine.

## Run Hello World: Test Your Installation

Now let's verify your setup is working by creating and running a complete Temporal application with both a Workflow and Activity. This test will confirm that:

- The Temporal PHP SDK is properly installed
- Your local Temporal Service is running
- You can successfully create and execute Workflows and Activities
- The communication between components is functioning correctly

### 1. Create the Activity

Create an Activity file (`src/GreetingActivity.php`):

```php
<?php

namespace App;

use Temporal\Activity\ActivityInterface;

#[ActivityInterface]
class GreetingActivity
{
    public function greet(string $name): string
    {
        return "Hello, {$name}!";
    }
}
```

### 2. Create the Workflow

Create a Workflow file (`src/SayHelloWorkflow.php`):

```php
<?php

namespace App;

use Temporal\Activity\ActivityOptions;
use Temporal\Workflow;
use Temporal\Workflow\WorkflowInterface;
use Temporal\Workflow\WorkflowMethod;

#[WorkflowInterface]
class SayHelloWorkflow
{
    #[WorkflowMethod]
    public function sayHello(string $name)
    {
        $activity = Workflow::newActivityStub(
            GreetingActivity::class,
            ActivityOptions::new()
                ->withStartToCloseTimeout(5),
        );

        return yield $activity->greet($name);
    }
}
```

### 3. Create a Worker file
Create a Worker file (`worker.php`) under the project root directory:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Temporal\WorkerFactory;

$factory = WorkerFactory::create();
$worker = $factory->newWorker();

// Register Workflows
$worker->registerWorkflowTypes(\App\SayHelloWorkflow::class);

// Register Activities
$worker->registerActivity(\App\GreetingActivity::class);

$factory->run();
```

### 4. Run the Worker

Previously, we created a Worker that executes Workflow and Activity tasks. Now, start the RoadRunner application server to run the Worker by opening a new terminal window and running this command:

```bash
./rr serve
```

A Worker polls the Task Queue you configure it to poll, looking for work to do. Once the Worker dequeues a Workflow or Activity Task from the Task Queue, it executes the Task. Workers are a crucial part of your Temporal application as they're what actually execute the tasks defined in your Workflows and Activities. For more information on Workers, see [Understanding Temporal](/evaluate/understanding-temporal#workers) and a [deep dive into Workers](/workers).

### 5. Execute the Workflow

Now that your Worker is running, it's time to start a Workflow Execution. This final step validates that everything is working correctly. Create a separate file called `client.php`:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Temporal\Client\GRPC\ServiceClient;
use Temporal\Client\WorkflowClient;

$workflowClient = WorkflowClient::create(ServiceClient::create('localhost:7233'));

$workflowStub = $workflowClient->newWorkflowStub(\App\SayHelloWorkflow::class);
$result = $workflowStub->sayHello('Temporal');

echo "Result: {$result}\n";
```

While your Worker is still running, open a new terminal and run:

```bash
php client.php
```

### Verify Success

If everything is working correctly, you should see:

- The Worker processing the Workflow and Activity
- Output: `Result: Hello, Temporal!`
- Workflow Execution details in the [Temporal Web UI](http://localhost:8233)

**Next:** Run your first Temporal Application -- create a basic Workflow and run it with the Temporal PHP SDK.

---

## Side Effects - PHP SDK

## How to use Side Effects in PHP {#side-effects}

Side Effects are used to execute non-deterministic code, such as generating a UUID or a random number, without compromising determinism in the Workflow. This is done by storing the results of the Side Effect into the Workflow [Event History](/workflow-execution/event#event-history).

A Side Effect doesn't re-execute during a Replay. Instead, it returns the recorded result from the Workflow Execution Event History.

Side Effects shouldn't fail. An exception that is thrown from the Side Effect causes failure and retry of the current Workflow Task.

An Activity or a Local Activity can also be used instead of a Side Effect, as its results are also persisted in Workflow Execution History.

:::note
You shouldn't modify the Workflow state inside a Side Effect, because Side Effects are not re-executed during Replay. Side Effect functions should only return a value, and that value can be used in Workflow code to alter state.
:::

To use a Side Effect in PHP, use the `Workflow::sideEffect()` function in your Workflow Definition to run non-deterministic code and return a value.

```php
#[Workflow\WorkflowMethod]
public function run()
{
    $random = yield Workflow::sideEffect(fn() => random_int(0, 100));

    if ($random < 50) {
        // ...
    } else {
        // ...
    }
}
```

---

## Temporal Client - PHP SDK

This guide introduces Temporal Clients. It explains the role and use of Clients and shows you how to configure your PHP Client code to connect to the Temporal Service.
This page shows how to do the following:

- [Connect to a local development Temporal Service](#connect-to-a-dev-cluster)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Start a Workflow Execution](#start-workflow-execution)
- [Advanced connection options](#advanced-connection-options)

## How to connect a Temporal Client to a Temporal Service {#connect-to-a-dev-cluster}

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the [Temporal Service](/temporal-service). Communication with a Temporal Service includes, but isn't limited to, the following:

- Scheduling Workflow Executions.
- Starting Workflow Executions.
- Sending Signals to Workflow Executions.
- Sending Queries to Workflow Executions.
- Sending Updates to Workflow Executions.
- Getting the results of a Workflow Execution.
- Providing an Activity Task Token.

:::caution
A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.
:::

When you are running a Temporal Service locally (such as with the [Temporal CLI](https://docs.temporal.io/cli/server#start-dev)), the number of connection options you must provide is minimal. Many SDKs default to the local host or IP address and port that Temporalite and [Docker Compose](https://github.com/temporalio/docker-compose) serve (`127.0.0.1:7233`).

In the PHP SDK, different client classes are responsible for different functional areas. The [`ServiceClient`](https://php.temporal.io/classes/Temporal-Client-GRPC-ServiceClient.html) is responsible for the low-level API and connection to the Temporal Service. It is also used in higher-level clients: [`WorkflowClient`](https://php.temporal.io/classes/Temporal-Client-WorkflowClient.html) and [`ScheduleClient`](https://php.temporal.io/classes/Temporal-Client-ScheduleClient.html).

:::note
RoadRunner is not required to work only with the client API; however, the [gRPC extension](https://pecl.php.net/package/grpc) is necessary.
:::

Use `create()` factory methods to create clients.

```php
use Temporal\Client\GRPC\ServiceClient;
use Temporal\Client\WorkflowClient;

$serviceClient = ServiceClient::create('localhost:7233');
$workflowClient = WorkflowClient::create($serviceClient);

// Use $workflowClient to work with Workflows ...
```

See the [Advanced connection options](#advanced-connection-options) section for more information on configuring the connection.

## How to connect a Temporal Client to Temporal Cloud {#connect-to-temporal-cloud}

When you connect to [Temporal Cloud](/cloud), you need to provide additional connection and client options that include the following:

- The [Temporal Cloud Namespace Id](/cloud/namespaces#temporal-cloud-namespace-id).
- The [Namespace's gRPC endpoint](/cloud/namespaces#temporal-cloud-grpc-endpoint). An endpoint listing is available at the [Temporal Cloud Website](https://cloud.temporal.io/namespaces) on each Namespace detail page. The endpoint contains the Namespace Id and port.
- mTLS CA certificate.
- mTLS private key.

For more information about managing and generating client certificates for Temporal Cloud, see [How to manage certificates in Temporal Cloud](/cloud/certificates).

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see [Temporal Customization Samples](https://github.com/temporalio/samples-server).
Use the [`ServiceClient::createSSL()`](https://php.temporal.io/classes/Temporal-Client-GRPC-BaseClient.html#method_createSSL) method to configure a client connection to the Temporal Service. The `$clientKey` argument must be combined with the `$clientPem` to authenticate the Client.

```php
use Temporal\Client\ClientOptions;
use Temporal\Client\GRPC\ServiceClient;
use Temporal\Client\WorkflowClient;

$serviceClient = ServiceClient::createSSL(
    address: 'your-namespace.your-account.tmprl.cloud:7233',
    // crt: 'certs/server-root-ca-cert.pem', # ROOT CA to validate the server cert
    clientKey: 'certs/client-private-key.pem',
    clientPem: 'certs/client-cert.pem',
    // overrideServerName: 'tls-sample',
);

$workflowClient = WorkflowClient::create(
    serviceClient: $serviceClient,
    options: (new ClientOptions())
        ->withNamespace('your-namespace.your-account'),
);
```

To [run Worker processes](/develop/php/core-application#run-a-dev-worker) that connect to Temporal Cloud, configure RoadRunner in the same way.

```yml
temporal:
  # ...
  tls:
    # root_ca: 'certs/server-root-ca-cert.pem'
    key: 'certs/client-private-key.pem'
    cert: 'certs/client-cert.pem'
    client_auth_type: require_and_verify_client_cert
    # server_name: 'tls-sample'
```

To set up the [API key](/cloud/api-keys) in the Client, use the [`ServiceClient::withAuthKey()`](https://php.temporal.io/classes/Temporal-Client-GRPC-BaseClient.html#method_withAuthKey) method:

```php
$serviceClient = \Temporal\Client\GRPC\ServiceClient::createSSL(/*...*/)
    ->withAuthKey('your-api-key');
```

## How to start a Workflow Execution {#start-workflow-execution}

[Workflow Execution](/workflow-execution) semantics rely on several parameters—that is, to start a Workflow Execution you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

In the following examples, all Workflow Executions are started using a Temporal Client. To spawn Workflow Executions from within another Workflow Execution, use either the [Child Workflow](/develop/php/child-workflows) or External Workflow APIs.

See the [Customize Workflow Type](/develop/php/core-application#workflow-type) section to see how to customize the name of the Workflow Type.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event ([WorkflowExecutionStarted](/references/events#workflowexecutionstarted)) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first [WorkflowTaskScheduled](/references/events#workflowtaskscheduled) Event.

Use a Workflow stub to start a Workflow Execution from within a Client. A Workflow stub is a proxy generated by the [`WorkflowClient`](https://php.temporal.io/classes/Temporal-Client-WorkflowClient.html). You can use a typed or untyped Workflow stub in the client code.

- Typed Workflow stubs are useful because they are type safe and allow you to invoke your Workflow methods such as `#[WorkflowMethod]`, `#[QueryMethod]`, `#[SignalMethod]`, and `#[UpdateMethod]` directly.
- An untyped Workflow stub does not use a Workflow interface. It is more flexible because it has methods from the [`WorkflowStubInterface`](https://php.temporal.io/classes/Temporal-Client-WorkflowStubInterface.html), such as `start`, `signal`, `query`, `update`, `getResult`, `cancel`, and `terminate`.

When using an untyped Workflow stub, you rely on the Workflow Type, Activity Type, Child Workflow Type, as well as Query and Signal names.
For example, suppose a Workflow is defined as follows:

```php
#[WorkflowInterface]
interface AccountTransferWorkflowInterface
{
    #[WorkflowMethod(name: "account.transfer")]
    public function begin(UuidInterface $transactionId);

    #[UpdateMethod(name: "pay")]
    public function move(UuidInterface $from, UuidInterface $to, int $amount);

    #[UpdateMethod(name: "finish")]
    public function commit();

    #[UpdateMethod(name: "cancel")]
    public function rollback(string $reason);
}
```

With a **typed** Workflow stub, you can use the `AccountTransferWorkflowInterface` to call the Workflow methods directly:

```php
$stub = $workflowClient->newWorkflowStub(AccountTransferWorkflowInterface::class);

$workflowClient->start($stub, $transactionId);

$stub->move($from1, $to1, $amount1);
$stub->move($from2, $to2, $amount2);
$stub->commit();
```

With an **untyped** Workflow stub, you need to specify the Workflow Type and method names explicitly:

```php
$stub = $workflowClient->newUntypedWorkflowStub('account.transfer');

$workflowClient->start($stub, $transactionId);

$stub->update('pay', $from1, $to1, $amount1);
$stub->update('pay', $from2, $to2, $amount2);
$stub->update('finish');
```

A Workflow Execution can be started either synchronously or asynchronously.

**Synchronous start**

A synchronous start initiates a Workflow and then waits for its completion. The started Workflow will not rely on the invocation process and will continue executing even if the waiting process crashes or stops.

You will need the Workflow interface or class name of the Workflow you want to start. For example:

```php
#[WorkflowInterface]
interface AccountTransferWorkflowInterface
{
    #[WorkflowMethod(name: "MoneyTransfer")]
    #[ReturnType(UuidInterface::class)]
    public function transfer(
        string $fromAccountId,
        string $toAccountId,
        string $referenceId,
        int $amountCents,
    );
}
```

To start the Workflow in sync mode:

```php
$accountTransfer = $workflowClient->newWorkflowStub(
    AccountTransferWorkflowInterface::class,
);

$result = $accountTransfer->transfer('fromID', 'toID', 'refID', 1000);
```

**Asynchronous start**

An asynchronous start initiates a Workflow Execution and immediately returns to the caller without waiting for a result. This is the most common way to start Workflows in a live environment.

To start a Workflow asynchronously, pass the Workflow stub instance and start parameters into the [`WorkflowClient::start()`](https://php.temporal.io/classes/Temporal-Client-WorkflowClientInterface.html#method_start) method.

```php
$accountTransfer = $workflowClient->newWorkflowStub(
    AccountTransferWorkflowInterface::class,
);

$run = $workflowClient->start($accountTransfer, 'fromID', 'toID', 'refID', 1000);
```

After the Workflow is started, you can receive details about the Workflow Execution or its result via the [`WorkflowRun`](https://php.temporal.io/classes/Temporal-Workflow-WorkflowRunInterface.html) object methods:

```php
$run = $workflowClient->start($accountTransfer, 'fromID', 'toID', 'refID', 1000);

// Get the Workflow ID
var_dump($run->getExecution()->getID());

// Describe the Workflow Execution
var_dump($run->describe());

// Wait for the Workflow to complete and get the result with a 10-second timeout
var_dump($run->getResult(timeout: 10));
```

**Recurring start**

You can start a Workflow Execution on a regular schedule with [the CronSchedule option](/develop/php/schedules#temporal-cron-jobs).

### How to set a Workflow's Task Queue {#set-task-queue}

In most SDKs, the only Workflow Option that must be set is the name of the [Task Queue](/task-queue).
When developing in PHP, the Task Queue name defaults to `"default"`. While setting a meaningful Task Queue name is recommended for better observability, Workflows can be run without setting this option.

:::note
PHP's default is different from most SDKs, which do require an explicit Task Queue name.
:::

For your code to execute, a Worker Process must be running ([how to run Worker Processes](/develop/php/core-application#run-a-dev-worker)). This process needs a Worker Entity that is polling the same Task Queue name.

Set the Workflow Task Queue with the Workflow stub in the Client code using [`WorkflowOptions::withTaskQueue()`](https://php.temporal.io/classes/Temporal-Client-WorkflowOptions.html#method_withTaskQueue).

```php
$stub = $workflowClient->newWorkflowStub(
    YourWorkflowInterface::class,
    WorkflowOptions::new()
        ->withTaskQueue("Workflow-Task-Queue-1"),
);
```

### How to set a Workflow Id {#workflow-id}

Although it is not required, we recommend providing your own [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) that maps to a business process or business entity identifier, such as an order identifier or customer identifier.

Set the Workflow Id with the Workflow stub in the Client code using [`WorkflowOptions::withWorkflowId()`](https://php.temporal.io/classes/Temporal-Client-WorkflowOptions.html#method_withWorkflowId).

```php
$stub = $workflowClient->newWorkflowStub(
    YourWorkflowInterface::class,
    WorkflowOptions::new()
        ->withWorkflowId("Workflow-Id"),
);
```

### How to get the results of a Workflow Execution {#get-workflow-results}

If the call to start a Workflow Execution is successful, you will gain access to the Workflow Execution's Run Id. The Workflow Id, Run Id, and Namespace may be used to uniquely identify a Workflow Execution in the system and get its result.

It's possible to both block progress on the result (synchronous execution) or get the result at some other point in time (asynchronous execution). In the Temporal Platform, it's also acceptable to use Queries as the preferred method for accessing the state and results of Workflow Executions.

If you need to wait for the completion of a Workflow after an asynchronous start, make a blocking call to the `WorkflowRun::getResult()` method.

```php
$stub = $workflowClient->newWorkflowStub(YourWorkflowInterface::class);

$run = $workflowClient->start($stub, 'fromID', 'toID', 'refID', 1000);

var_dump($run->getResult());
```

With an untyped Workflow stub, you can use the [`WorkflowStub::getResult()`](https://php.temporal.io/classes/Temporal-Workflow-WorkflowRunInterface.html#method_getResult) method:

```php
$stub = $workflowClient->newUntypedWorkflowStub('account.transfer');

$workflowClient->start($stub, 'fromID', 'toID', 'refID', 1000);

var_dump($stub->getResult(timeout: 5.5));
```

Note that you can specify a timeout for the `getResult()` method in seconds. If the Workflow does not complete within the specified time, a `TimeoutException` will be thrown. See how to limit all RPC calls in the [RPC timeout](#configure-rpc-timeout) section.

## Advanced connection options {#advanced-connection-options}

In PHP, it is common practice to work with resources in blocking mode. Long blocks can quickly exhaust the pool of available workers and lead to application failure. This section introduces features and configuration examples of the PHP SDK when working with the Temporal Client API.

### Connection

gRPC connections in the PHP SDK are lazy by default, meaning they are not established until the first call.
To force establishing the connection to the Temporal Service, you can call the [`ConnectionInterface::connect()`](https://php.temporal.io/classes/Temporal-Client-GRPC-Connection-ConnectionInterface.html#method_connect) or [`ServiceClient::getServerCapabilities()`](https://php.temporal.io/classes/Temporal-Client-GRPC-ServiceClientInterface.html#method_getServerCapabilities) method.

```php
// ...
$serviceClient->getConnection()->connect(timeout: 10);
// or
$serviceClient->getServerCapabilities();
```

If, for some reason, the established connection is broken, the SDK will automatically attempt to restore it, taking into account the configured retry policy.

### Retry policy {#configure-rpc-retry-policy}

Whenever the client fails to connect to the server, an error with a status code is generated. If the status code is `UNKNOWN`, `UNAVAILABLE`, or `RESOURCE_EXHAUSTED`, the client will make another connection attempt. By default, the number of attempts is unlimited, and the interval between them will range from 0.5 to 100 seconds with a backoff coefficient of 2. This means that the client will likely be blocked until it establishes a connection to the server through infinite attempts.

If you want to change the default behavior, use the `withRetryOptions()` method when creating the client:

```php
use Temporal\Client\Common\RpcRetryOptions;
use Temporal\Client\GRPC\ServiceClient;
use Temporal\Client\WorkflowClient;

$serviceClient = ServiceClient::create('localhost:7233');
$workflowClient = WorkflowClient::create($serviceClient)
    ->withRetryOptions(
        RpcRetryOptions::new()
            ->withMaximumAttempts(10)
            ->withInitialInterval('1 second') // The first retry will be in 1 second
            ->withBackoffCoefficient(2.5) // Each next retry time will be multiplied by 2.5
            ->withMaximumInterval('20 seconds') // The maximum interval between attempts
            ->withMaximumJitterCoefficient(0.2) // Actual retry time can be +/- 20% of the calculated time
    );
```

### RPC timeout {#configure-rpc-timeout}

When the client calls the service's RPC, there is no default time limit for waiting for a response. This can result in a call like `$result = $run->getResult();` blocking the PHP worker until the Workflow completes. In some cases, this is not the desired behavior, and there may be a need to set a reasonable timeout for waiting for the RPC to complete.

Use the `withTimeout()` method to build a client with a timeout for all RPC calls.

```php
use Temporal\Client\GRPC\ServiceClient;
use Temporal\Client\WorkflowClient;

$serviceClient = ServiceClient::create('localhost:7233');
$workflowClient = WorkflowClient::create($serviceClient)
    ->withTimeout(5.75);

// Create a Workflow stub
$stub = $workflowClient->newWorkflowStub(AccountTransferWorkflowInterface::class);

// If the Workflow does not complete within 5.75 seconds, a TimeoutException will be thrown
$result = $stub->transfer('fromID', 'toID', 'refID', 1000);
```

:::note
The `withTimeout()` method is immutable. If you need to change the timeout for individual operations, create a new client from the existing one with a specific timeout: `$newClient = $workflowClient->withTimeout(0);` (`0` means no timeout).
:::

---

## Testing - PHP SDK

The Testing section of the Temporal Application development guide describes the frameworks that facilitate Workflow and integration testing.
In the context of Temporal, you can create these types of automated tests:

- **End-to-end:** Running a Temporal Server and Worker with all its Workflows and Activities; starting and interacting with Workflows from a Client.
- **Integration:** Anything between end-to-end and unit testing.
  - Running Activities with mocked Context and other SDK imports (and usually network requests).
  - Running Workers with mock Activities, and using a Client to start Workflows.
  - Running Workflows with mocked SDK imports.
- **Unit:** Running a piece of Workflow or Activity code (a function or method) and mocking any code it calls.

We generally recommend writing the majority of your tests as integration tests.

Because the test server supports skipping time, use the test server for both end-to-end and integration tests with Workers.

## Testing Activities {#test-activities}

An Activity can be tested with a mock Activity environment, which provides a way to mock the Activity context, listen to Heartbeats, and cancel the Activity. This behavior allows you to test the Activity in isolation by calling it directly, without needing to create a Worker to run the Activity.

## Testing Workflows {#test-workflows}

### How to mock Activities {#mock-activities}

Mock the Activity invocation when unit testing your Workflows. When integration testing Workflows with a Worker, you can mock Activities by providing mock Activity implementations to the Worker.

**RoadRunner config**

To mock an Activity in PHP, use [RoadRunner Key-Value storage](https://github.com/spiral/roadrunner-kv) and add the following lines to your `tests/.rr.test.yaml` file.

```yaml
---
# tests/.rr.test.yaml
kv:
  test:
    driver: memory
    config:
      interval: 10
```

If you want to be able to mock Activities, use `WorkerFactory` from the `Temporal\Testing` namespace in your PHP Worker:

```php
<?php
// worker.test.php
use Temporal\Testing\WorkerFactory;

$factory = WorkerFactory::create();
$worker = $factory->newWorker();
$worker->registerWorkflowTypes(MyWorkflow::class);
$worker->registerActivity(MyActivity::class);
$factory->run();
```

Then, in your tests, use the `ActivityMocker` class to mock an Activity.

Assume we have the following Activity:

```php
#[ActivityInterface(prefix: "SimpleActivity.")]
interface SimpleActivityInterface
{
    #[ActivityMethod('doSomething')]
    public function doSomething(string $input): string;
}
```

To mock it in the test, you can do this:

```php
final class SimpleWorkflowTestCase extends TestCase
{
    private WorkflowClient $workflowClient;
    private ActivityMocker $activityMocks;

    protected function setUp(): void
    {
        $this->workflowClient = new WorkflowClient(ServiceClient::create('localhost:7233'));
        $this->activityMocks = new ActivityMocker();

        parent::setUp();
    }

    protected function tearDown(): void
    {
        $this->activityMocks->clear();

        parent::tearDown();
    }

    public function testWorkflowReturnsUpperCasedInput(): void
    {
        $this->activityMocks->expectCompletion('SimpleActivity.doSomething', 'world');
        $workflow = $this->workflowClient->newWorkflowStub(SimpleWorkflow::class);
        $run = $this->workflowClient->start($workflow, 'hello');
        $this->assertSame('world', $run->getResult('string'));
    }
}
```

In the preceding test case, we do the following:

1. Instantiate `ActivityMocker` in the `setUp()` method of the test.
2. Clear the cache after each test in `tearDown()`.
3. Mock an Activity call to return the string `world`.
To mock a failure, use the `expectFailure()` method:

```php
$this->activityMocks->expectFailure('SimpleActivity.echo', new \LogicException('something went wrong'));
```

### How to skip time {#skip-time}

Some long-running Workflows can persist for months or even years. Using the test framework allows your Workflow code to skip time and complete your tests in seconds rather than the full durations your Workflows specify.

For example, if you have a Workflow sleep for a day, or have an Activity failure with a long retry interval, you don't need to wait the entire length of the sleep period to test whether the sleep function works. Instead, test the logic that happens after the sleep by skipping forward in time and complete your tests in a timely manner.

The test framework included in most SDKs is an in-memory implementation of Temporal Server that supports skipping time. Time is a global property of an instance of `TestWorkflowEnvironment`: skipping time (either automatically or manually) applies to all currently running tests. If you need different time behaviors for different tests, run your tests in a series or with separate instances of the test server. For example, you could run all tests with automatic time skipping in parallel, and then all tests with manual time skipping in series, and then all tests without time skipping in parallel.

#### Set up time skipping {#setting-up}

Set up the time-skipping test framework in the SDK of your choice.

1. In the `tests` folder, create `bootstrap.php` with the following contents:

   ```php
   <?php

   declare(strict_types=1);

   require __DIR__ . '/../vendor/autoload.php';

   use Temporal\Testing\Environment;

   $environment = Environment::create();
   $environment->start();
   register_shutdown_function(fn () => $environment->stop());
   ```

   If you don't want to run the test server with all of your tests, you can add a condition to start a test only if the `RUN_TEMPORAL_TEST_SERVER` environment variable is present:

   ```php
   if (getenv('RUN_TEMPORAL_TEST_SERVER') !== false) {
       $environment = Environment::create();
       $environment->start('./rr serve -c .rr.silent.yaml --workflow-id tests');
       register_shutdown_function(fn() => $environment->stop());
   }
   ```

2. Add `bootstrap.php` and the `TEMPORAL_ADDRESS` environment variable to `phpunit.xml`:

   ```xml
   <phpunit bootstrap="tests/bootstrap.php">
       <php>
           <env name="TEMPORAL_ADDRESS" value="127.0.0.1:7233" />
       </php>
   </phpunit>
   ```

3. Add the test server executable to `.gitignore`:

   ```gitignore
   temporal-test-server
   ```

## How to Replay a Workflow Execution {#replay}

Replay recreates the exact state of a Workflow Execution. You can replay a Workflow from the beginning of its Event History.

Replay succeeds only if the [Workflow Definition](/workflow-definition) is compatible with the provided history from a deterministic point of view.

When you test changes to your Workflow Definitions, we recommend doing the following as part of your CI checks:

1. Determine which Workflow Types or Task Queues (or both) will be targeted by the Worker code under test.
2. Download the Event Histories of a representative set of recent open and closed Workflows from each Task Queue, either programmatically using the SDK client or via the Temporal CLI.
3. Run the Event Histories through replay.
4. Fail CI if any error is encountered during replay.

The following are examples of fetching and replaying Event Histories:

To replay Workflow Executions, use the `\Temporal\Testing\Replay\WorkflowReplayer` class. In the following example, Event Histories are fetched from the Temporal Service and then replayed. If a Workflow is non-deterministic, a `NonDeterministicWorkflowException` will be thrown.
Note that this requires [Advanced Visibility](/visibility#advanced-visibility) to be enabled.

```php
/**
 * We assume you already have a WorkflowClient and WorkflowReplayer in scope.
 * @var \Temporal\Client\WorkflowClientInterface $workflowClient
 * @var \Temporal\Testing\Replay\WorkflowReplayer $replayer
 */

// Find all workflow executions of type "MyWorkflow" and task queue "MyTaskQueue".
$executions = $workflowClient->listWorkflowExecutions(
    "WorkflowType='MyWorkflow' AND TaskQueue='MyTaskQueue'"
);

// Replay each workflow execution.
foreach ($executions as $executionInfo) {
    try {
        $replayer->replayFromServer(
            workflowType: $executionInfo->type->name,
            execution: $executionInfo->execution,
        );
    } catch (\Temporal\Testing\Replay\Exception\ReplayerException $e) {
        // Handle a replay error.
    }
}
```

In the next example, an Event History is loaded from a JSON file, and the maximum number of replayed Events is limited to 42.

```php
$replayer->replayFromJSON(
    workflowType: 'MyWorkflow',
    path: 'history.json',
    lastEventId: 42, // optional
);
```

You can download a Workflow History using PHP and then replay it from a History object held in memory:

```php
$history = $this->workflowClient->getWorkflowHistory(
    execution: $run->getExecution(),
)->getHistory();

(new WorkflowReplayer())->replayHistory($history);
```

---

## Durable Timers - PHP SDK

## What is a Timer? {#timers}

A Workflow can set a durable timer for a fixed time period. In some SDKs, the function is called `sleep()`, and in others, it's called `timer()`.

A Workflow can sleep for months. Timers are persisted, so even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service are back up, the `sleep()` call will resolve and your code will continue executing.

Sleeping is a resource-light operation: it does not tie up the process, and you can run millions of Timers off a single Worker.

To set a Timer in PHP, use `Workflow::timer()` and pass the number of seconds you want to wait before continuing. The following example sets a Timer that sleeps for 5 minutes:

```php
yield Workflow::timer(300); // sleep for 5 minutes
```

You cannot set a Timer invocation inside the `await` or `awaitWithTimeout` methods.

---

## Versioning - PHP SDK feature guide

Since Workflow Executions in Temporal can run for long periods — sometimes months or even years — it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress.

The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows.

With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.

There are two primary Versioning methods that you can use:

- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). The Worker Versioning feature allows you to tag your Workers and programmatically roll them out in versioned deployments, so that old Workers can run old code paths and new Workers can run new code paths.
- [Versioning with Patching](#php-sdk-patching-api). This method works by adding branches to your code tied to specific revisions.
It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions.

## Worker Versioning

Temporal's [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) feature allows you to tag your Workers and programmatically roll them out in Deployment Versions, so that old Workers can run old code paths and new Workers can run new code paths. This way, you can pin your Workflows to specific revisions, avoiding the need for patching.

## Versioning with Patching {#php-sdk-patching-api}

### Patching with GetVersion

A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow Executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning).

Suppose you have an initial Workflow that runs `prePatchActivity`:

```php
#[WorkflowInterface]
class MyWorkflow
{
    private $activity;

    public function __construct()
    {
        $this->activity = Workflow::newActivityStub(
            YourActivityInterface::class,
            ActivityOptions::new()->withScheduleToStartTimeout(60)
        );
    }

    #[WorkflowMethod]
    public function runAsync()
    {
        $result = yield $this->activity->prePatchActivity();
    }
}
```

Suppose you replaced `prePatchActivity` with `postPatchActivity` and deployed the updated code. If an existing Workflow Execution was started by the original version of the Workflow code, where `prePatchActivity` was run, and then resumed running on a new Worker where it was replaced with `postPatchActivity`, the server-side Event History would be out of sync. This would cause the Workflow to fail with a nondeterminism error.

To resolve this, you can use [Workflow::getVersion](https://php.temporal.io/classes/Temporal-Workflow.html#method_getVersion) to patch your Workflow:

```php
#[WorkflowInterface]
class MyWorkflow
{
    // ...

    #[WorkflowMethod]
    public function runAsync()
    {
        $version = yield Workflow::getVersion('Step 1', Workflow::DEFAULT_VERSION, 1);

        $result = $version === Workflow::DEFAULT_VERSION
            ? yield $this->activity->prePatchActivity()
            : yield $this->activity->postPatchActivity();
    }
}
```

When `getVersion()` is run for the new Workflow Execution, it records a marker in the Event History so that all future calls to `getVersion()` for this change Id — `Step 1` in the example — on this Workflow Execution will always return the given version number, which is `1` in the example.

If you make an additional change, such as adding `anotherPatchActivity()`, you need to add some additional code:

```php
#[WorkflowInterface]
class MyWorkflow
{
    // ...

    #[WorkflowMethod]
    public function runAsync()
    {
        $version = yield Workflow::getVersion('Step 1', Workflow::DEFAULT_VERSION, maxSupported: 2);

        $result = match($version) {
            Workflow::DEFAULT_VERSION => yield $this->activity->prePatchActivity(),
            1 => yield $this->activity->postPatchActivity(),
            2 => yield $this->activity->anotherPatchActivity(),
        };
    }
}
```

Note that we changed `maxSupported` from 1 to 2. A Workflow that has already passed this `getVersion()` call before it was introduced returns `DEFAULT_VERSION`. A Workflow that was run with `maxSupported` set to 1 returns 1. New Workflows return 2.
After all the Workflow Executions prior to version 1 have left retention, you can remove the code for that version:

```php
#[WorkflowMethod]
public function runAsync()
{
    $version = yield Workflow::getVersion('Step 1', minSupported: 1, maxSupported: 2);

    $result = match($version) {
        1 => yield $this->activity->postPatchActivity(),
        2 => yield $this->activity->anotherPatchActivity(),
    };
}
```

You'll note that `minSupported` has changed from `DEFAULT_VERSION` to `1`. If an older version of the Workflow Execution history is replayed on this code, it fails because the minimum expected version is 1.

After all the Workflow Executions for version 1 have left retention, you can remove version 1 so that your code looks like the following:

```php
#[WorkflowMethod]
public function runAsync()
{
    $version = yield Workflow::getVersion('Step 1', minSupported: 2, maxSupported: 2);

    $result = yield $this->activity->anotherPatchActivity();
}
```

Patching allows you to make changes to currently running Workflows. It is a powerful method for introducing compatible changes without introducing non-determinism errors.

### Workflow cutovers

To understand why Patching is useful, it's helpful to demonstrate cutting over an entire Workflow. Since incompatible changes only affect open Workflow Executions of the same type, you can avoid determinism errors by creating a whole new Workflow when making changes. To do this, you can copy the Workflow Definition function, giving it a different name, and register both names with your Workers.

For example, you would duplicate `MyWorkflow` as `MyWorkflowV2`:

```php
#[WorkflowInterface]
class MyWorkflow {}

#[WorkflowInterface]
class MyWorkflowV2 {}
```

You would then need to update the Worker configuration, and any other identifier strings, to register both Workflow Types. The downside of this method is that it requires you to duplicate code and to update any commands used to start the Workflow. This can become impractical over time. This method also does not provide a way to version any still-running Workflows -- it is essentially just a cutover, unlike Patching.

## Runtime checking {#runtime-checking}

The Temporal PHP SDK performs a runtime check to help prevent obvious incompatible changes. Adding, removing, or reordering any of the following operations without Versioning triggers the runtime check and results in a nondeterminism error:

- executing an Activity
- executing a Child Workflow
- starting a Timer
- requesting cancellation of an external Workflow
- running a Side Effect
- signaling an external Workflow
- sleeping

The runtime check does not perform a thorough check. For example, it does not check the Activity's input arguments or the Timer duration. Each Temporal SDK implements these sanity checks differently, and they are not a complete check for non-deterministic changes. Instead, you should incorporate [Replay Testing](/develop/php/testing-suite#replay) when making revisions.
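For example, here is a minimal sketch reusing the `WorkflowReplayer` API shown earlier in this guide, replaying a previously downloaded history file against your revised code:

```php
use Temporal\Testing\Replay\WorkflowReplayer;

// Replay a recorded Event History against the current Workflow code.
// A nondeterminism error here means the revision needs a patch.
(new WorkflowReplayer())->replayFromJSON(
    workflowType: 'MyWorkflow',
    path: 'history.json',
);
```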
---

## Plugins Guide

---

# Plugins

A **Plugin** is an abstraction that allows you to customize any aspect of your Temporal Worker setup, including registering Workflow and Activity definitions, modifying Worker and Client options, and more. Using plugins, you can build reusable open-source libraries or build add-ons for engineers at your company.

This guide will teach you how to create plugins and give platform engineers general guidance on using and managing Temporal's primitives.

Here are some common use cases for plugins:

- AI Agent SDKs
- Observability, tracing, or logging middleware
- Adding reliable built-in functionality such as LLM calls, messaging systems, and payments infrastructure
- Encryption or compliance middleware

## How to build a Plugin

The recommended way to start building plugins is with a `SimplePlugin`. This abstraction covers the vast majority of plugins people want to write. For advanced use cases, you can extend the methods in the lower-level classes that `SimplePlugin` is based on without re-implementing what you've done. See the [Advanced Topics section](#advanced-topics-for-plugins) for more information.

### Example Plugins

If you prefer to learn by getting hands-on with code, check out some existing plugins.

- Temporal's Python SDK ships with an [OpenAI Agents SDK](https://github.com/temporalio/sdk-python/tree/main/temporalio/contrib/openai_agents) plugin
- [Temporal client and Worker plugin for Pydantic AI](https://github.com/pydantic/pydantic-ai/blob/757d40932ebb8ef00f25cc469ff44e9b267b1aa3/pydantic_ai_slim/pydantic_ai/durable_exec/temporal/__init__.py#L83)

## What you can provide to users in a plugin

There are a number of features you can give your users with a plugin. Here's a short list of some of the things you can do.

- [Built-in Activities](#built-in-activity)
- [Workflow-friendly libraries](#workflow-friendly-libraries)
- [Built-in Workflows](#built-in-workflows)
- [Built-in Nexus Operations](#built-in-nexus-operations)
- [Custom Data Converters](#custom-data-converters)
- [Interceptors](#interceptors)

### Built-in Activity

You can provide built-in Activities in a Plugin for users to call from their Workflows. Check out the [Activities doc](/activities) for more detail on how these work. You should refer to the [best practices for creating Activities](/activity-definition#best-practices-for-defining-activities) when you are making Activity plugins.

#### Timeouts and retry policies

Temporal's Activity retry mechanism gives applications the benefits of durable execution. See the [Activity retry policy explanation](/activity-definition#activity-retry-policy) for more details. Here is an example with Python:

```python
from temporalio import activity

@activity.defn
async def some_activity() -> None:
    return None

# SimplePlugin is assumed to be imported from your SDK's plugin support.
plugin = SimplePlugin(
    activities=[some_activity],
)
```

### Workflow-friendly libraries

You can provide a library with functionality for use within a Workflow if you'd like to abstract away some Temporal-specific details for your users (see the sketch after this list). Your library can call elements you include in your Plugin such as Activities, Child Workflows, Signals, Updates, Queries, Nexus Operations, Interceptors, Data Converters, and any other code as long as it follows these requirements:

- It should be [deterministic](/workflow-definition#deterministic-constraints), running the same way every time it's executed. Non-deterministic code should go in Activities or Nexus Operations.
- See [observability](/evaluate/development-production-features/observability) to avoid duplicating observation side effects when Workflows replay.
- Put other side effects inside of Activities or [Local Activities](/local-activity). This helps your Workflow handle being restarted, resumed, or executed in a different process from where it originally began without losing correctness or state consistency.
- See [testing your Plugin](#testing-your-plugin) to write tests that check for issues with side effects.
- It should run quickly since it may be replayed many times during a long Workflow execution. More expensive code should go in Activities or Nexus Operations.
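As a sketch of what such a library function might look like (all names here are hypothetical), the helper below exposes a plain async call while delegating the non-deterministic work to an Activity the Plugin registers:

```python
from datetime import timedelta

from temporalio import workflow

# Hypothetical workflow-friendly helper a Plugin might expose. The helper
# itself only orchestrates (deterministic); the real work happens in the
# plugin-registered "translate_text" Activity (an assumed name).
async def translate(text: str, locale: str) -> str:
    return await workflow.execute_activity(
        "translate_text",
        args=[text, locale],
        start_to_close_timeout=timedelta(seconds=30),
    )
```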
A Plugin should allow a user to decompose their Workflows into Activities, as well as Child Workflows and Nexus Calls when needed. This gives users granular control through retries and timeouts, debuggability through the Temporal UI, operability with resets, pauses, and cancels, memoization for efficiency and resumability, and scalability using Task Queues and Workers.

Users use Workflows for:

- Orchestration and decision-making
- Interactivity via [message-passing](/evaluate/development-production-features/workflow-message-passing)
- Tracing and observability

#### Making changes to your library

Your users may want to keep their Workflows running across deployments of their Worker code. If their deployment includes a new version of your Plugin, changes to your Plugin could break Workflow code that started before the new version was deployed. This can be due to [non-deterministic behavior from code changes](/workflow-definition#non-deterministic-change) in your Plugin. See [testing](#testing-your-plugin) to learn how to test for this. And, if you make substantive changes, you need to use [patching](/patching).

#### Example of a Workflow library that uses a Plugin in Python

- [Implementation of the `OpenAIAgentsPlugin`](https://github.com/temporalio/sdk-python/tree/main/temporalio/contrib/openai_agents)
- [Example of replay testing](https://github.com/temporalio/sdk-python/blob/main/tests/contrib/openai_agents/test_openai_replay.py)

### Built-in Workflows

You can provide a built-in Workflow in a `SimplePlugin`. It's callable as a Child Workflow or standalone. When you want to provide a piece of functionality that's more complex than an Activity, you can:

- Use a [Workflow Library](#workflow-friendly-libraries) that runs directly in the end user's Workflow
- Add a Child Workflow

Consider adding a Child Workflow when one or more of these conditions applies:

- The child should outlive the parent.
- The Workflow Event History would otherwise [not scale](/workflow-execution/event#event-history-limits) in parent Workflows.
- You want a separate Workflow Id for the child so that it can be operated independently of the parent's state (canceled, terminated, paused).

Any Workflow can be run as a standalone Workflow or as a Child Workflow, so registering a Child Workflow in a `SimplePlugin` is the same as registering any Workflow. Here is an example with Python:

```python
from temporalio import workflow
from temporalio.client import Client
from temporalio.worker import Worker

@workflow.defn
class HelloWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return f"Hello, {name}!"

# SimplePlugin is assumed to be imported from your SDK's plugin support.
plugin = SimplePlugin(
    workflows=[HelloWorkflow],
)

...

client = await Client.connect(
    "localhost:7233",
    plugins=[
        plugin,
    ],
)

# The plugin registered on the Client supplies the Workflow registrations.
worker = Worker(
    client,
    task_queue="task-queue",
)
async with worker:
    await client.execute_workflow(
        HelloWorkflow.run,
        "Tim",
        id="hello-workflow-id",  # a Workflow Id is required
        task_queue=worker.task_queue,
    )
```

### Built-in Nexus Operations

Nexus calls are used from Workflows similarly to Activities, and you can check out some common [Nexus Use Cases](/nexus/use-cases). Like Activities, Nexus Call arguments and return values must be serializable.
Here's an example of how to register Nexus handlers in a Plugin with Python:

```python
import nexusrpc

# WeatherInput and Weather are assumed to be dataclasses defined elsewhere.

@nexusrpc.service
class WeatherService:
    get_weather_nexus_operation: nexusrpc.Operation[WeatherInput, Weather]

@nexusrpc.handler.service_handler(service=WeatherService)
class WeatherServiceHandler:
    @nexusrpc.handler.sync_operation
    async def get_weather_nexus_operation(
        self, ctx: nexusrpc.handler.StartOperationContext, input: WeatherInput
    ) -> Weather:
        return Weather(
            city=input.city,
            temperature_range="14-20C",
            conditions="Sunny with wind.",
        )

plugin = SimplePlugin(
    nexus_service_handlers=[WeatherServiceHandler()],
)
```

### Custom Data Converters

A [custom Data Converter](/default-custom-data-converters#custom-data-converter) can alter data formats or provide compression or encryption. Note that you can use an existing Data Converter such as, in Python, `PydanticPayloadConverter` in your Plugin.

Here's an example of how to add a custom Data Converter to a Plugin with Python:

```python
from typing import Optional

import temporalio.converter
from temporalio.contrib.pydantic import pydantic_data_converter
from temporalio.converter import DataConverter

def add_converter(converter: Optional[DataConverter]) -> DataConverter:
    if converter is None or converter == temporalio.converter.DataConverter.default:
        return pydantic_data_converter
    # Should consider interactions with other plugins,
    # as this will override the data converter.
    # This may mean failing, warning, or something else.
    return converter

plugin = SimplePlugin(
    data_converter=add_converter,
)
```

### Interceptors

Interceptors are middleware that can run before and after various calls such as Activities, Workflows, and Signals. You can [learn more about interceptors](/develop/python/interceptors) for the details of implementing them. They're used to:

- Create side effects such as logging and tracing.
- Modify arguments, such as adding headers for authorization or tracing propagation.

Here's an example of how to add one to a Plugin with Python:

```python
import temporalio.worker

class SomeWorkerInterceptor(temporalio.worker.Interceptor):
    pass  # Your implementation

plugin = SimplePlugin(
    worker_interceptors=[SomeWorkerInterceptor()],
)
```

### Special considerations for different languages

Each of the SDKs has nuances you should be aware of so you can account for them in your code.

For example, you can choose to [run your Workflows in a sandbox in Python](/develop/python/python-sdk-sandbox). This lets you run Workflow code in a sandbox environment to help prevent non-determinism errors in your application. To work for users who use sandboxing, your Plugin should specify the Workflow runner that it uses. Here's an example of how to explicitly define the Workflow runner for your Plugin with Python:

```python
import dataclasses
from typing import Optional

from temporalio.worker import WorkflowRunner
from temporalio.worker.workflow_sandbox import SandboxedWorkflowRunner

def workflow_runner(runner: Optional[WorkflowRunner]) -> WorkflowRunner:
    if not runner:
        raise ValueError("No WorkflowRunner provided to the OpenAI plugin.")
    # If in sandbox, add additional passthrough
    if isinstance(runner, SandboxedWorkflowRunner):
        return dataclasses.replace(
            runner,
            restrictions=runner.restrictions.with_passthrough_modules("mcp"),
        )
    return runner

SimplePlugin(..., workflow_runner=workflow_runner)
```

## Testing your Plugin {#testing-your-plugin}

To test your Plugin, you'll write normal Temporal Workflow tests, having included the plugin in your Client. Two special concerns are versioning tests, for when you're making changes to your plugin, and testing unwanted side effects.
### Versioning tests

When you make changes to your plugin after it has already shipped to users, it's recommended that you set up [replay testing](/develop/python/testing-suite#replay) for each important change to make sure that you're not causing non-determinism errors for your users.

### Side effects tests

Your Plugin should cater to Workflows resuming in different processes than the ones they started on and then replaying from the beginning, which can happen, for example, after an intermittent failure. You can ensure you're not depending on local side effects by turning Workflow caching off, which means the Workflow replays from the top each time it progresses. Here's an example with Python:

```python
client = await Client.connect(
    "localhost:7233",
    plugins=[
        my_module.my_plugin,
    ],
)

async with Worker(
    client,
    task_queue="task-queue",
    max_cached_workflows=0,
):
    # Start a workflow
    ...
```

Check for duplicate side effects or other types of failures. It's harder to test for side effects on global variables, so avoid using them entirely.

## Advanced Topics for Plugins

If you go deeper into `SimplePlugin`, you'll see it aggregates a pair of raw Plugin classes that you can use for a higher level of flexibility: a Worker Plugin and a Client Plugin.

- Worker Plugins contain functionality that runs inside your users' Workers.
- Client Plugins contain functionality that runs when Workflows are started and when results are returned.

If your Plugin implements both of them, registering it in the Client will also register it in Workers created with that Client.

### Client Plugin

Client Plugins are provided to the Temporal Client on creation. They can change Client configurations and Service Client configurations. `ClientConfig` contains settings like Client Interceptors and Data Converters. `ConnectConfig` configures the actual network connections to the local or cloud Temporal server with values like an API key.

This is the basic implementation of a Client Plugin using Python:

```python
from typing import Awaitable, Callable

import temporalio.client
import temporalio.service
from temporalio.client import ClientConfig
from temporalio.service import ConnectConfig, ServiceClient

class MyAdvancedClientPlugin(temporalio.client.Plugin):
    def configure_client(self, config: ClientConfig) -> ClientConfig:
        return config

    async def connect_service_client(
        self,
        config: ConnectConfig,
        next: Callable[[ConnectConfig], Awaitable[ServiceClient]],
    ) -> temporalio.service.ServiceClient:
        return await next(config)
```

The primary use case for integrations so far is setting a `DataConverter`, as in the [Data Converter example](#custom-data-converters).

### Worker Plugin

Worker Plugins are provided at Worker creation and have more capabilities, and a correspondingly larger implementation surface, than Client Plugins. They can change Worker configurations, run code during the Worker lifetime, and manage the Replayer in a similar way. You can learn more about the [Replayer](#replayer) in a later section.

Similar to `configure_client` above, you implement `configure_worker` and `configure_replayer` to change any necessary configurations. In addition, `run_worker` allows you to execute code before and after the Worker runs. This can be used to set up resources or globals for use during the Worker execution. `run_replayer` does the same for the Replayer, but keep in mind that the Replayer has a more complex return type.
This is a basic implementation of a Worker Plugin using Python:

```python
from contextlib import AbstractAsyncContextManager
from typing import AsyncIterator, Awaitable, Callable

import temporalio.client
import temporalio.worker
from temporalio.client import WorkflowHistory
from temporalio.worker import (
    Replayer,
    ReplayerConfig,
    Worker,
    WorkerConfig,
    WorkflowReplayResult,
)

class MyAdvancedWorkerPlugin(temporalio.worker.Plugin):
    def configure_worker(self, config: WorkerConfig) -> WorkerConfig:
        return config

    async def run_worker(
        self, worker: Worker, next: Callable[[Worker], Awaitable[None]]
    ) -> None:
        await next(worker)

    def configure_replayer(self, config: ReplayerConfig) -> ReplayerConfig:
        return config

    def run_replayer(
        self,
        replayer: Replayer,
        histories: AsyncIterator[temporalio.client.WorkflowHistory],
        next: Callable[
            [Replayer, AsyncIterator[WorkflowHistory]],
            AbstractAsyncContextManager[AsyncIterator[WorkflowReplayResult]],
        ],
    ) -> AbstractAsyncContextManager[AsyncIterator[WorkflowReplayResult]]:
        return next(replayer, histories)
```

### Replayer

The Replayer allows Workflow authors to validate that their Workflows will work after changes to either the Workflow or a library they depend on. It's normally used in test runs or when testing Workers before they roll out to production. The Replayer runs on a Workflow History created by a previous Workflow run. Suppose something in the Workflow or underlying code has changed in a way that could potentially cause a non-determinism error. In that case, the Replayer will notice the difference between how the code runs now and the History provided.

The Replayer is typically configured identically to the Worker and Client. If you're using `SimplePlugin`, this is already handled for you. If you need to do something custom for the Replayer, you can configure it directly. Here's an example of how to do that with Python:

```python
class MyAdvancedWorkerPlugin(temporalio.worker.Plugin):
    def configure_replayer(self, config: ReplayerConfig) -> ReplayerConfig:
        return config

    def run_replayer(
        self,
        replayer: Replayer,
        histories: AsyncIterator[temporalio.client.WorkflowHistory],
        next: Callable[
            [Replayer, AsyncIterator[WorkflowHistory]],
            AbstractAsyncContextManager[AsyncIterator[WorkflowReplayResult]],
        ],
    ) -> AbstractAsyncContextManager[AsyncIterator[WorkflowReplayResult]]:
        return next(replayer, histories)
```

---

## Asynchronous Activity Completion - Python SDK

**How to asynchronously complete an Activity using the Temporal Python SDK.**

[Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) enables the Activity Function to return without the Activity Execution completing.

There are three steps to follow:

1. The Activity provides the external system with identifying information needed to complete the Activity Execution. Identifying information can be a [Task Token](/activity-execution#task-token), or a combination of Namespace, Workflow Id, and Activity Id.
2. The Activity Function completes in a way that identifies it as waiting to be completed by an external system.
3. The Temporal Client is used to Heartbeat and complete the Activity.

To mark an Activity as completing asynchronously, do the following inside the Activity.

```python
# Capture the token for later completion
captured_token = activity.info().task_token
activity.raise_complete_async()
```

To update an Activity outside the Activity, use the [get_async_activity_handle()](https://python.temporal.io/temporalio.client.Client.html#get_async_activity_handle) method to get a handle of the Activity.

```python
handle = my_client.get_async_activity_handle(task_token=captured_token)
```

Then, on that handle, you can call the `heartbeat`, `complete`, `fail`, or `report_cancellation` methods to update the Activity.
```python
await handle.complete("Completion value.")
```

---

## Benign exceptions - Python SDK

**How to mark an Activity error as benign using the Temporal Python SDK**

When Activities throw errors that are expected or not severe, they can create noise in your logs, metrics, and OpenTelemetry traces, making it harder to identify real issues. By marking these errors as benign, you can exclude them from your observability data while still handling them in your Workflow logic.

To mark an error as benign, set the `category` parameter to `ApplicationErrorCategory.BENIGN` when raising an [`ApplicationError`](https://python.temporal.io/temporalio.exceptions.ApplicationError.html).

Benign errors:

- Have Activity failure logs downgraded to DEBUG level
- Do not emit Activity failure metrics
- Do not set the OpenTelemetry failure status to ERROR

```python
from temporalio import activity
from temporalio.exceptions import ApplicationError, ApplicationErrorCategory

@activity.defn
async def my_activity() -> str:
    try:
        return await call_external_service()
    except Exception as err:
        raise ApplicationError(
            message=str(err),
            # Mark this error as benign since it's expected
            category=ApplicationErrorCategory.BENIGN,
        )
```

Use benign exceptions for Activity errors that occur regularly as part of normal operations, such as polling an external service that isn't ready yet, or handling expected transient failures that will be retried.

---

## Interrupt a Workflow Execution - Python SDK

You can interrupt a Workflow Execution in one of the following ways:

- [Cancel](#cancellation): Canceling a Workflow provides a graceful way to stop Workflow Execution.
- [Terminate](#termination): Terminating a Workflow forcefully stops Workflow Execution. This action resembles killing a process.
  - The system records a `WorkflowExecutionTerminated` event in the Workflow History.
  - The termination forcefully and immediately stops the Workflow Execution.
  - The Workflow code gets no chance to handle termination.
  - A Workflow Task doesn't get scheduled.

In most cases, canceling is preferable because it allows the Workflow to finish gracefully. Terminate only if the Workflow is stuck and cannot be canceled normally.

## Cancel a Workflow Execution {#cancellation}

Canceling a Workflow provides a graceful way to stop Workflow Execution. This action resembles sending a `SIGTERM` to a process.

- The system records a `WorkflowExecutionCancelRequested` event in the Workflow History.
- A Workflow Task gets scheduled to process the cancelation.
- The Workflow code can handle the cancelation and execute any cleanup logic.
- The system doesn't forcefully stop the Workflow.

To cancel a Workflow Execution in Python, use the [cancel()](https://python.temporal.io/temporalio.client.WorkflowHandle.html#cancel) function on the Workflow handle.

```python
await client.get_workflow_handle("your_workflow_id").cancel()
```

### Cancel an Activity from a Workflow {#cancel-activity}

Canceling an Activity from within a Workflow requires that the Activity Execution sends Heartbeats and sets a Heartbeat Timeout. If the Heartbeat is not invoked, the Activity cannot receive a cancellation request. When any non-immediate Activity is executed, the Activity Execution should send Heartbeats and set a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) to ensure that the server knows it is still working.
When an Activity is canceled, an error is raised in the Activity at the next available opportunity. If cleanup logic needs to be performed, it can be done in a `finally` clause or inside a caught cancel error. However, for the Activity to appear canceled, the exception needs to be re-raised.

:::note

Unlike regular Activities, [Local Activities](/local-activity) can be canceled even if they don't send Heartbeats. Local Activities are handled locally, and all the information needed to handle the cancellation logic is available in the same Worker process.

:::

To cancel an Activity from a Workflow Execution, call the [cancel()](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.cancel) method on the Activity handle that is returned from [start_activity()](https://python.temporal.io/temporalio.workflow.html#start_activity).

```python
import asyncio
from datetime import timedelta
from typing import NoReturn

from temporalio import activity, workflow

# ComposeArgsInput is assumed to be a dataclass defined elsewhere.

@activity.defn
async def cancellable_activity(input: ComposeArgsInput) -> NoReturn:
    try:
        while True:
            print("Heartbeating cancel activity")
            await asyncio.sleep(0.5)
            activity.heartbeat("some details")
    except asyncio.CancelledError:
        print("Activity cancelled")
        raise

@activity.defn
async def run_activity(input: ComposeArgsInput):
    print("Executing activity")
    return input.arg1 + input.arg2

@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self, input: ComposeArgsInput) -> None:
        activity_handle = workflow.start_activity(
            cancellable_activity,
            ComposeArgsInput(input.arg1, input.arg2),
            start_to_close_timeout=timedelta(minutes=5),
            heartbeat_timeout=timedelta(seconds=30),
        )

        await asyncio.sleep(3)
        activity_handle.cancel()
```

:::note

The Activity handle is a Python task. By calling `cancel()`, you're essentially requesting the task to be canceled.

:::

## Terminate a Workflow Execution {#termination}

Terminating a Workflow forcefully stops Workflow Execution. This action resembles killing a process.

- The system records a `WorkflowExecutionTerminated` event in the Workflow History.
- The termination forcefully and immediately stops the Workflow Execution.
- The Workflow code gets no chance to handle termination.
- A Workflow Task doesn't get scheduled.

To terminate a Workflow Execution in Python, use the [terminate()](https://python.temporal.io/temporalio.client.WorkflowHandle.html#terminate) function on the Workflow handle.

```python
await client.get_workflow_handle("your_workflow_id").terminate()
```

## Reset a Workflow Execution {#reset}

Resetting a Workflow Execution terminates the current Workflow Execution and starts a new Workflow Execution from a point you specify in its Event History. Use reset when a Workflow is blocked due to a non-deterministic error or other issues that prevent it from completing.

When you reset a Workflow, the Event History up to the reset point is copied to the new Workflow Execution, and the Workflow resumes from that point with the current code. Reset only works if you've fixed the underlying issue, such as removing non-deterministic code. Any progress made after the reset point will be discarded. Provide a reason when resetting, as it will be recorded in the Event History.

1. Navigate to the Workflow Execution details page.
2. Click the **Reset** button in the top right dropdown menu.
3. Select the Event ID to reset to.
4. Provide a reason for the reset.
5. Confirm the reset.

The Web UI shows available reset points and creates a link to the new Workflow Execution after the reset completes.
Use the `temporal workflow reset` command to reset a Workflow Execution:

```bash
temporal workflow reset \
  --workflow-id <workflow-id> \
  --event-id <event-id> \
  --reason "Reason for reset"
```

For example:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code"
```

By default, the command resets the latest Workflow Execution in the `default` Namespace. Use `--run-id` to reset a specific run. Use `--namespace` to specify a different Namespace:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code" \
  --namespace my-namespace \
  --tls-cert-path /path/to/cert.pem \
  --tls-key-path /path/to/key.pem
```

Monitor the new Workflow Execution after resetting to ensure it completes successfully.

---

## Child Workflows - Python SDK

This page shows how to do the following:

- [Start a Child Workflow Execution](#child-workflows)
- [Set a Parent Close Policy](#parent-close-policy)

## Start a Child Workflow Execution {#child-workflows}

**How to start a Child Workflow Execution using the Temporal Python SDK.**

A [Child Workflow Execution](/child-workflows) is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.

When using a Child Workflow API, Child Workflow related Events ([StartChildWorkflowExecutionInitiated](/references/events#startchildworkflowexecutioninitiated), [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted), [ChildWorkflowExecutionCompleted](/references/events#childworkflowexecutioncompleted), etc.) are logged in the Workflow Execution Event History.

Always block progress until the [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted) Event is logged to the Event History to ensure the Child Workflow Execution has started. After that, Child Workflow Executions may be abandoned using the _Abandon_ [Parent Close Policy](/parent-close-policy) set in the Child Workflow Options. In practice, this means starting the Child Workflow and awaiting the call that returns its handle; that call resolves only after the Child Workflow Execution has spawned.

To spawn a Child Workflow Execution in Python, use the [`execute_child_workflow()`](https://python.temporal.io/temporalio.workflow.html#execute_child_workflow) function, which starts the Child Workflow and waits for completion, or use the [`start_child_workflow()`](https://python.temporal.io/temporalio.workflow.html#start_child_workflow) function to start a Child Workflow and return its handle. The handle is useful if you want to do something after the child has only started, to get its Workflow/Run Id, or to be able to Signal it while it's running.

:::note

`execute_child_workflow()` is a helper function for `start_child_workflow()` plus `await handle`.

:::

```python
# ...

@workflow.defn
class ComposeGreetingWorkflow:
    @workflow.run
    async def run(self, input: ComposeGreetingInput) -> str:
        return f"{input.greeting}, {input.name}!"

@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_child_workflow(
            ComposeGreetingWorkflow.run,
            ComposeGreetingInput("Hello", name),
            id="hello-child-workflow-workflow-child-id",
            # ...
        )
```
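If you need the handle before the child completes (for example, to log its Workflow Id or to Signal it), you can use `start_child_workflow()` instead. A minimal sketch reusing the Workflow above:

```python
# Start (rather than execute) the Child Workflow; the awaited call returns
# a handle once the Child Workflow Execution has started.
handle = await workflow.start_child_workflow(
    ComposeGreetingWorkflow.run,
    ComposeGreetingInput("Hello", name),
    id="hello-child-workflow-workflow-child-id",
)
workflow.logger.info("Started child %s", handle.id)

# Await the handle later to get the child's result.
result = await handle
```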
### Set a Parent Close Policy {#parent-close-policy}

**How to set a Parent Close Policy**

A [Parent Close Policy](/parent-close-policy) determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out). The default Parent Close Policy option is set to terminate the Child Workflow Execution.

Set the `parent_close_policy` parameter inside the [`start_child_workflow`](https://python.temporal.io/temporalio.workflow.html#start_child_workflow) function or the [`execute_child_workflow()`](https://python.temporal.io/temporalio.workflow.html#execute_child_workflow) function to specify the behavior of the Child Workflow when the Parent Workflow closes.

```python
from temporalio.workflow import ParentClosePolicy

# ...

@workflow.defn
class ComposeGreetingWorkflow:
    @workflow.run
    async def run(self, input: ComposeGreetingInput) -> str:
        return f"{input.greeting}, {input.name}!"

@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_child_workflow(
            ComposeGreetingWorkflow.run,
            ComposeGreetingInput("Hello", name),
            id="hello-child-workflow-workflow-child-id",
            parent_close_policy=ParentClosePolicy.ABANDON,
        )
```

---

## Continue-As-New - Python SDK

This page answers the following questions for Python developers:

- [What is Continue-As-New?](#what)
- [How to Continue-As-New?](#how)
- [When is it right to Continue-as-New?](#when)
- [How to test Continue-as-New?](#how-to-test)

## What is Continue-As-New? {#what}

[Continue-As-New](/workflow-execution/continue-as-new) lets a Workflow Execution close successfully and creates a new Workflow Execution. You can think of it as a checkpoint for when your Workflow gets too long or approaches certain scaling limits. The new Workflow Execution is in the same [chain](/workflow-execution#workflow-execution-chain); it keeps the same Workflow Id but gets a new Run Id and a fresh Event History. It also receives your Workflow's usual parameters.

## How to Continue-As-New using the Python SDK {#how}

First, design your Workflow parameters so that you can pass in the "current state" when you Continue-As-New into the next Workflow run. This state is typically set to `None` for the original caller of the Workflow.

```python
@dataclass
class ClusterManagerInput:
    state: Optional[ClusterManagerState] = None
    test_continue_as_new: bool = False

@workflow.run
async def run(self, input: ClusterManagerInput) -> ClusterManagerResult:
```

The test hook in the above snippet is covered [below](#how-to-test).

Inside your Workflow, call the [`continue_as_new()`](https://python.temporal.io/temporalio.workflow.html#continue_as_new) function with the same type. This stops the Workflow right away and starts a new one.

```python
workflow.continue_as_new(
    ClusterManagerInput(
        state=self.state,
        test_continue_as_new=input.test_continue_as_new,
    )
)
```

### Considerations for Workflows with Message Handlers {#with-message-handlers}

If you use Updates or Signals, don't call Continue-as-New from the handlers. Instead, wait for your handlers to finish in your main Workflow before you run `continue_as_new`. See the [`all_handlers_finished`](message-passing#wait-for-message-handlers) example for guidance.
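A minimal sketch of that pattern (assuming the Workflow defines Signal or Update handlers; `workflow.all_handlers_finished()` reports whether they have all completed):

```python
# Wait for any in-flight Signal/Update handlers to finish, then
# Continue-As-New with the carried-over state.
await workflow.wait_condition(workflow.all_handlers_finished)
workflow.continue_as_new(
    ClusterManagerInput(
        state=self.state,
        test_continue_as_new=input.test_continue_as_new,
    )
)
```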
## When is it right to Continue-as-New using the Python SDK? {#when}

Use Continue-as-New when your Workflow might encounter degraded performance or [Event History Limits](/workflow-execution/event#event-history). Temporal tracks your Workflow's progress against these limits to let you know when you should Continue-as-New. Call `workflow.info().is_continue_as_new_suggested()` to check if it's time.

## How to test Continue-as-New using the Python SDK {#how-to-test}

Testing Workflows that naturally Continue-as-New may be time-consuming and resource-intensive. Instead, add a test hook to check your Workflow's Continue-as-New behavior faster in automated tests.

For example, when `test_continue_as_new == True`, this sample creates a test-only variable called `self.max_history_length` and sets it to a small value. A helper method in the Workflow checks it each time it considers using Continue-as-New:

```python
def should_continue_as_new(self) -> bool:
    if workflow.info().is_continue_as_new_suggested():
        return True
    # For testing
    if (
        self.max_history_length
        and workflow.info().get_current_history_length() > self.max_history_length
    ):
        return True
    return False
```

---

## Converters and encryption - Python SDK

Temporal's security model is designed around client-side encryption of Payloads. A client may encrypt Payloads before sending them to the server, and decrypt them after receiving them from the server. This provides a high degree of confidentiality because the Temporal Server itself has absolutely no knowledge of the actual data. It also gives implementers more power and more freedom regarding which client is able to read which data -- they can control access with keys, algorithms, or other security measures.

A Temporal developer adds client-side encryption of Payloads by providing a Custom Payload Codec to its Client. Depending on business needs, a complete implementation of Payload Encryption may involve selecting appropriate encryption algorithms, managing encryption keys, restricting a subset of their users from viewing payload output, or a combination of these.

The server itself never adds encryption over Payloads. Therefore, unless client-side encryption is implemented, Payload data will be persisted in non-encrypted form to the data store, and any Client that can make requests to a Temporal namespace (including the Temporal UI and CLI) will be able to read Payloads contained in Workflows. When working with sensitive data, you should always implement Payload encryption.

## Custom Payload Codec {#custom-payload-codec}

**How to use a custom Payload Codec with the Temporal Python SDK.**

Custom Data Converters can change the default Temporal Data Conversion behavior by adding hooks, sending payloads to external storage, or performing different encoding steps. If you only need to change the encoding performed on your payloads -- by adding compression or encryption -- you can override the default Data Converter by creating a new `PayloadCodec`.

The `PayloadCodec` needs to implement `encode()` and `decode()` functions at a minimum. These should loop through all of a Workflow's payloads, perform all of your necessary marshaling, compression, or encryption steps in order, and set an `"encoding"` metadata field.

In this example, the `encode` method marshals and then compresses a payload using Python's [cramjam](https://github.com/milesgranger/cramjam) library to provide `snappy` compression.
The `decode()` function implements the `encode()` logic in reverse:

```python
from typing import Iterable, List

import cramjam
from temporalio.api.common.v1 import Payload
from temporalio.converter import PayloadCodec

class EncryptionCodec(PayloadCodec):
    async def encode(self, payloads: Iterable[Payload]) -> List[Payload]:
        return [
            Payload(
                metadata={
                    "encoding": b"binary/snappy",
                },
                data=(bytes(cramjam.snappy.compress(p.SerializeToString()))),
            )
            for p in payloads
        ]

    async def decode(self, payloads: Iterable[Payload]) -> List[Payload]:
        ret: List[Payload] = []
        for p in payloads:
            if p.metadata.get("encoding", b"").decode() != "binary/snappy":
                ret.append(p)
                continue
            ret.append(Payload.FromString(bytes(cramjam.snappy.decompress(p.data))))
        return ret
```

This example verifies that an encoded payload carries the `binary/snappy` encoding -- i.e., that it was encoded using the same custom `encode()` function -- and if so, performs decompression followed by unmarshaling.

**Set Data Converter to use custom Payload Codec**

Add a `data_converter` parameter to your `Client.connect()` options that overrides the default Converter with your Payload Codec:

```python
import dataclasses

import temporalio.converter
from temporalio.client import Client

from codec import EncryptionCodec

client = await Client.connect(
    "localhost:7233",
    data_converter=dataclasses.replace(
        temporalio.converter.default(), payload_codec=EncryptionCodec()
    ),
)
```

- Data **encoding** is performed by the client using the converters and codecs provided by Temporal or your custom implementation when passing input to the Temporal Cluster. For example, plain text input is usually serialized into a JSON object, and can then be compressed or encrypted.
- Data **decoding** may be performed by your application logic during your Workflows or Activities as necessary, but decoded Workflow results are never persisted back to the Temporal Cluster. Instead, they are stored encoded on the Cluster, and you need to provide an additional parameter when using the [temporal workflow show](/cli/workflow#show) command or when browsing the Web UI to view output.

For reference, see the [Encryption](https://github.com/temporalio/samples-python/tree/main/encryption) sample.

### Using a Codec Server

A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. The Codec Server is independent of the Temporal Cluster and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment. The Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand. Refer to the [Codec Server](/production-deployment/data-encryption) documentation for information on how to design and deploy a Codec Server.

## Payload conversion

Temporal SDKs provide a default [Payload Converter](/payload-converter) that can be customized to convert a custom data type to [Payload](/dataconversion#payload) and back.

### Conversion sequence {#conversion-sequence}

The order in which your encoding Payload Converters are applied depends on the order given to the Data Converter. You can set multiple encoding Payload Converters to run your conversions. When the Data Converter receives a value for conversion, it passes through each Payload Converter in sequence until the converter that handles the data type does the conversion. Payload Converters can be customized independently of a Payload Codec.
### Custom Payload Converter {#custom-payload-converter}

**How to use a custom Payload Converter with the Temporal Python SDK.**

Data Converters convert raw Temporal payloads to/from actual Python types. A custom Data Converter of type `temporalio.converter.DataConverter` can be set through the `data_converter` parameter of the `Client` constructor. Data Converters are a combination of Payload Converters, Payload Codecs, and Failure Converters. Payload Converters convert Python values to/from serialized bytes. Payload Codecs convert bytes to bytes (for example, for compression or encryption). Failure Converters convert exceptions to/from serialized failures.

The default Data Converter supports converting multiple types including:

- `None`
- `bytes`
- `google.protobuf.message.Message` - As JSON when encoding, but has the ability to decode binary proto from other languages
- Anything that can be converted to JSON including:
  - Anything that [`json.dump`](https://docs.python.org/3/library/json.html#json.dump) supports natively
  - [dataclasses](https://docs.python.org/3/library/dataclasses.html)
  - Iterables, including ones JSON dump may not support by default, e.g. `set`
  - [IntEnum, StrEnum](https://docs.python.org/3/library/enum.html)-based enumerations
  - [UUID](https://docs.python.org/3/library/uuid.html)

To use Pydantic model instances, see [Pydantic Support](#pydantic-support).

`datetime.date`, `datetime.time`, and `datetime.datetime` can only be used with the Pydantic Data Converter.

Although Workflows, Updates, Signals, and Queries can all be defined with multiple input parameters, users are strongly encouraged to use a single `dataclass` or Pydantic model parameter so that fields with defaults can be easily added without breaking compatibility. Similar advice applies to return values.

Classes with generics may not have the generics properly resolved. The current implementation does not have generic type resolution. Users should use concrete types.

### Pydantic Support

To use Pydantic model instances, install Pydantic and set the Pydantic Data Converter when creating Client instances:

```python
from temporalio.contrib.pydantic import pydantic_data_converter

client = Client(data_converter=pydantic_data_converter, ...)
```

This Data Converter supports conversion of all [types supported by Pydantic](https://docs.pydantic.dev/latest/api/standard_library_types/) to and from JSON.

In addition to Pydantic models, supported types include:

- Everything that [`json.dumps()`](https://docs.python.org/3/library/json.html#py-to-json-table) supports by default.
- Several standard library types that `json.dumps()` does not support, including dataclasses, types from the datetime module, sets, UUID, etc.
- Custom types composed of any of these, with any degree of nesting. For example, a list of Pydantic models with `datetime` fields.

See the [Pydantic documentation](https://docs.pydantic.dev/latest/api/standard_library_types/) for full details.

:::note

Pydantic v1 isn't supported by this Data Converter. If you aren't yet able to upgrade from Pydantic v1, see https://github.com/temporalio/samples-python/tree/main/pydantic_converter/v1 for limited v1 support.

:::

### Custom Type Data Conversion

When converting from JSON, Workflow and Activity type hints are taken into account to convert to the proper types.
All common Python typings, including `Optional`, `Union`, all forms of iterables and mappings, and `NewType`, are supported in addition to the regular JSON values mentioned before.

In Python, Data Converters contain a reference to a Payload Converter class that is used to convert input and output payloads. By default, the Payload Converter is a `CompositePayloadConverter`, which contains multiple `EncodingPayloadConverter`s to try to serialize/deserialize payloads. Upon serialization, each `EncodingPayloadConverter` is used in order until one succeeds.

To implement a custom encoding for a custom type, a new `EncodingPayloadConverter` can be created. For example, to support `IPv4Address` types:

```python
import dataclasses
import ipaddress
from typing import Any, Optional, Type

from temporalio.api.common.v1 import Payload
from temporalio.converter import (
    CompositePayloadConverter,
    DataConverter,
    DefaultPayloadConverter,
    EncodingPayloadConverter,
)

class IPv4AddressEncodingPayloadConverter(EncodingPayloadConverter):
    @property
    def encoding(self) -> str:
        return "text/ipv4-address"

    def to_payload(self, value: Any) -> Optional[Payload]:
        if isinstance(value, ipaddress.IPv4Address):
            return Payload(
                metadata={"encoding": self.encoding.encode()},
                data=str(value).encode(),
            )
        else:
            return None

    def from_payload(self, payload: Payload, type_hint: Optional[Type] = None) -> Any:
        assert not type_hint or type_hint is ipaddress.IPv4Address
        return ipaddress.IPv4Address(payload.data.decode())

class IPv4AddressPayloadConverter(CompositePayloadConverter):
    def __init__(self) -> None:
        # Just add ours as first before the defaults
        super().__init__(
            IPv4AddressEncodingPayloadConverter(),
            *DefaultPayloadConverter.default_encoding_payload_converters,
        )

my_data_converter = dataclasses.replace(
    DataConverter.default,
    payload_converter_class=IPv4AddressPayloadConverter,
)
```

This approach works well for many custom types. However, you might need to override the behavior of just the existing JSON encoding payload converter to support a new type. It is already the last encoding data converter in the list, so it's the fall-through behavior for any otherwise unknown type. Customizing the existing JSON converter has the benefit of making the type work in lists, unions, etc.

The conversion can be customized for serialization with a custom `json.JSONEncoder` and for deserialization with a custom `JSONTypeConverter`. For example, to support `IPv4Address` types in existing JSON conversion:

```python
import dataclasses
import ipaddress
from typing import Any, Optional, Type, Union

from temporalio.converter import (
    AdvancedJSONEncoder,
    CompositePayloadConverter,
    DataConverter,
    DefaultPayloadConverter,
    JSONPlainPayloadConverter,
    JSONTypeConverter,
    _JSONTypeConverterUnhandled,
)

class IPv4AddressJSONEncoder(AdvancedJSONEncoder):
    def default(self, o: Any) -> Any:
        if isinstance(o, ipaddress.IPv4Address):
            return str(o)
        return super().default(o)

class IPv4AddressJSONTypeConverter(JSONTypeConverter):
    def to_typed_value(
        self, hint: Type, value: Any
    ) -> Union[Optional[Any], _JSONTypeConverterUnhandled]:
        if issubclass(hint, ipaddress.IPv4Address):
            return ipaddress.IPv4Address(value)
        return JSONTypeConverter.Unhandled

class IPv4AddressPayloadConverter(CompositePayloadConverter):
    def __init__(self) -> None:
        # Replace the default JSON plain converter with our own that has our
        # encoder and type converter
        json_converter = JSONPlainPayloadConverter(
            encoder=IPv4AddressJSONEncoder,
            custom_type_converters=[IPv4AddressJSONTypeConverter()],
        )
        super().__init__(
            *[
                c if not isinstance(c, JSONPlainPayloadConverter) else json_converter
                for c in DefaultPayloadConverter.default_encoding_payload_converters
            ]
        )

my_data_converter = dataclasses.replace(
    DataConverter.default,
    payload_converter_class=IPv4AddressPayloadConverter,
)
```

Now `IPv4Address` can be used in type hints, including collections, optionals, etc.
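A short usage sketch: install the converter on the Client, and the custom type can then appear in Workflow and Activity signatures (the connection address is a placeholder):

```python
from temporalio.client import Client

# With my_data_converter from above, IPv4Address values round-trip through
# Workflow and Activity arguments and return values.
client = await Client.connect(
    "localhost:7233",
    data_converter=my_data_converter,
)
```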
---

## Core application - Python SDK

This page shows how to do the following:

- [Develop a basic Workflow](#develop-workflows)
- [Define Workflow parameters](#workflow-parameters)
- [Define Workflow return parameters](#workflow-return-values)
- [Customize your Workflow Type](#workflow-type)
- [Develop Workflow logic](#workflow-logic-requirements)
- [Develop a basic Activity](#develop-activities)
- [Develop Activity Parameters](#activity-parameters)
- [Define Activity return values](#activity-return-values)
- [Customize your Activity Type](#activity-type)
- [Start an Activity Execution](#activity-execution)
- [Set the required Activity Timeouts](#required-timeout)
- [Get the results of an Activity Execution](#get-activity-results)
- [Run a Worker Process](#run-a-dev-worker)
- [Register types](#register-types)

## Develop a basic Workflow {#develop-workflows}

**How to develop a basic Workflow using the Temporal Python SDK.**

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a [Workflow Definition](/workflow-definition).

In the Temporal Python SDK programming model, Workflows are defined as classes.

Specify the `@workflow.defn` decorator on the Workflow class to identify a Workflow.

Use the `@workflow.run` decorator to mark the entry-point method to be invoked. This must be set on one asynchronous method defined on the same class as `@workflow.defn`. Run methods have positional parameters.

```python
from temporalio import workflow

# ...

@workflow.defn(name="YourWorkflow")
class YourWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            your_activity,
            YourParams("Hello", name),
            start_to_close_timeout=timedelta(seconds=10),
        )
```

### Define Workflow parameters {#workflow-parameters}

**How to define Workflow parameters using the Temporal Python SDK.**

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow. All Workflow Definition parameters must be serializable.

Workflow parameters are the method parameters of the singular method decorated with `@workflow.run`. These can be any data type Temporal can convert, including [`dataclasses`](https://docs.python.org/3/library/dataclasses.html) when properly type-annotated. Technically this can be multiple parameters, but Temporal strongly encourages a single `dataclass` parameter containing all input fields.

```python
from dataclasses import dataclass

# ...

@dataclass
class YourParams:
    greeting: str
    name: str
```

### Define Workflow return parameters {#workflow-return-values}

**How to define Workflow return parameters using the Temporal Python SDK.**

Workflow return values must also be serializable. Returning results, returning errors, or throwing exceptions is fairly idiomatic in each language that is supported. However, Temporal APIs that must be used to get the result of a Workflow Execution will only ever receive one of either the result or the error.

To return a value from the Workflow, use `return` to return an object. To get the results of a Workflow Execution, use either the `start_workflow()` or `execute_workflow()` asynchronous methods.
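As a brief sketch (the Workflow Id and Task Queue are placeholders), `execute_workflow()` starts the Workflow and waits for its return value, while `start_workflow()` returns a handle whose result you can await later:

```python
result = await client.execute_workflow(
    YourWorkflow.run,
    "World",
    id="your-workflow-id",
    task_queue="your-task-queue",
)
```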
For performance and behavior reasons, you should pass through all modules, including Activities, Nexus services, and third-party plugins, whose calls will be deterministic, using [`imports_passed_through`](https://python.temporal.io/temporalio.workflow.unsafe.html#imports_passed_through) or at Worker creation time by customizing the runner's restrictions with [`with_passthrough_modules`](https://python.temporal.io/temporalio.worker.workflow_sandbox.SandboxRestrictions.html#with_passthrough_modules).

```python
from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from your_activities_dacx import your_activity
    from your_dataobject_dacx import YourParams

# ...

@workflow.defn(name="YourWorkflow")
class YourWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            your_activity,
            YourParams("Hello", name),
            start_to_close_timeout=timedelta(seconds=10),
        )
```

### Customize your Workflow Type {#workflow-type}

**How to customize your Workflow Type using the Temporal Python SDK.**

Workflows have a Type that is referred to as the Workflow name. The following examples demonstrate how to set a custom name for your Workflow Type.

You can customize the Workflow name with a custom name in the decorator argument. For example, `@workflow.defn(name="your-workflow-name")`. If the name parameter is not specified, the Workflow name defaults to the unqualified class name.

```python
from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from your_activities_dacx import your_activity
    from your_dataobject_dacx import YourParams

# ...

@workflow.defn(name="YourWorkflow")
class YourWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            your_activity,
            YourParams("Hello", name),
            start_to_close_timeout=timedelta(seconds=10),
        )
```

### Develop Workflow logic {#workflow-logic-requirements}

**How to develop Workflow logic using the Temporal Python SDK.**

Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.

Workflow code must be deterministic. This means:

- no threading
- no randomness
- no external calls to processes
- no network I/O
- no global state mutation
- no system date or time

All Workflow-safe APIs in the [`temporalio.workflow`](https://python.temporal.io/temporalio.workflow.html) module must run in the implicit [`asyncio` event loop](https://docs.python.org/3/library/asyncio-eventloop.html) and be _deterministic_.

## Develop a basic Activity {#develop-activities}

**How to develop a basic Activity using the Temporal Python SDK.**

One of the primary things that Workflows do is orchestrate the execution of Activities. An Activity is a normal function or method execution that's intended to execute a single, well-defined action (either short- or long-running), such as querying a database, calling a third-party API, or transcoding a media file. An Activity can interact with the world outside the Temporal Platform or use a Temporal Client to interact with a Temporal Service.
For the Workflow to be able to execute the Activity, we must define the [Activity Definition](/activity-definition).

You can develop an Activity Definition by using the `@activity.defn` decorator. Register the function as an Activity with a custom name through a decorator argument, for example `@activity.defn(name="your_activity")`.

:::note

The Temporal Python SDK supports multiple ways of implementing an Activity:

- Asynchronously using [`asyncio`](https://docs.python.org/3/library/asyncio.html)
- Synchronously multithreaded using [`concurrent.futures.ThreadPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor)
- Synchronously multiprocess using [`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) and [`multiprocessing.managers.SyncManager`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.managers.SyncManager)

Blocking the async event loop in Python would turn your asynchronous program into a synchronous program that executes serially, defeating the entire purpose of using `asyncio`. This can also lead to potential deadlock and unpredictable behavior that causes tasks to be unable to execute. Debugging these issues can be difficult and time-consuming, as locating the source of the blocking call might not always be immediately obvious.

Because of this, avoid making blocking calls from within an asynchronous Activity, or use an async-safe library to perform these actions. If you must use a blocking library, consider using a synchronous Activity instead.

:::

```python
from temporalio import activity

# ...

@activity.defn(name="your_activity")
async def your_activity(input: YourParams) -> str:
    return f"{input.greeting}, {input.name}!"
```

### Develop Activity Parameters {#activity-parameters}

**How to develop Activity Parameters using the Temporal Python SDK.**

There is no explicit limit to the total number of parameters that an [Activity Definition](/activity-definition) may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload.

A single argument is limited to a maximum size of 2 MB. And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Also, keep in mind that all Payload data is recorded in the [Workflow Execution Event History](/workflow-execution/event#event-history), and large Event Histories can affect Worker performance. This is because the entire Event History could be transferred to a Worker Process with a [Workflow Task](/tasks#workflow-task).

Some SDKs require that you pass context objects, others do not. When it comes to your application data—that is, data that is serialized and encoded into a Payload—we recommend that you use a single object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a function or method signature.

Activity parameters are the function parameters of the function decorated with `@activity.defn`. These can be any data type Temporal can convert, including dataclasses when properly type-annotated. Technically this can be multiple parameters, but Temporal strongly encourages a single dataclass parameter containing all input fields.
View the source code in the context of the rest of the application code.

```python
from temporalio import activity

from your_dataobject_dacx import YourParams

# ...

@activity.defn(name="your_activity")
async def your_activity(input: YourParams) -> str:
    return f"{input.greeting}, {input.name}!"
```

### Define Activity return values {#activity-return-values}

**How to define Activity return values using the Temporal Python SDK.**

All data returned from an Activity must be serializable.

Activity return values are subject to Payload size limits in Temporal. The default Payload size limit is 2 MB, and there is a hard limit of 4 MB for any gRPC message in the Event History transaction ([see the Cloud limits here](https://docs.temporal.io/cloud/limits#per-message-grpc-limit)). Keep in mind that all return values are recorded in the [Workflow Execution Event History](/workflow-execution/event#event-history).

An Activity Execution can return its inputs or any other values the Activity produces. The following example defines an Activity that takes an object as input and returns a string.

View the source code in the context of the rest of the application code.

```python
# ...

@activity.defn(name="your_activity")
async def your_activity(input: YourParams) -> str:
    return f"{input.greeting}, {input.name}!"
```

### Customize your Activity Type {#activity-type}

**How to customize your Activity Type.**

Activities have a Type, referred to as the Activity name. The following examples demonstrate how to set a custom name for your Activity Type.

You can customize the Activity name with a custom name in the decorator argument. For example, `@activity.defn(name="your-activity")`. If the name parameter is not specified, the Activity name defaults to the function name.

View the source code in the context of the rest of the application code.

```python
# ...

@activity.defn(name="your_activity")
async def your_activity(input: YourParams) -> str:
    return f"{input.greeting}, {input.name}!"
```

## Start an Activity Execution {#activity-execution}

**How to start an Activity Execution using the Temporal Python SDK.**

Calls to spawn [Activity Executions](/activity-execution) are written within a [Workflow Definition](/workflow-definition). The call to spawn an Activity Execution generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command. This results in the set of three [Activity Task](/tasks#activity-task) related Events ([ActivityTaskScheduled](/references/events#activitytaskscheduled), [ActivityTaskStarted](/references/events#activitytaskstarted), and a closing Event such as [ActivityTaskCompleted](/references/events#activitytaskcompleted)) in your Workflow Execution Event History.

A single instance of the Activities implementation is shared across multiple simultaneous Activity invocations. Activity implementation code should be _idempotent_.

The values passed to Activities through invocation parameters or returned through a result value are recorded in the Execution history. The entire Execution history is transferred from the Temporal Service to Workflow Workers when a Workflow state needs to recover. A large Execution history can thus adversely impact the performance of your Workflow. Therefore, be mindful of the amount of data you transfer through Activity invocation parameters or return values. Otherwise, no additional limitations exist on Activity implementations.

To spawn an Activity Execution, use the [`execute_activity()`](https://python.temporal.io/temporalio.workflow.html#execute_activity) operation from within your Workflow Definition.
`execute_activity()` is a shortcut for [`start_activity()`](https://python.temporal.io/temporalio.workflow.html#start_activity) that waits on its result. To get just the handle so you can wait and cancel separately, use `start_activity()`. In most cases, use `execute_activity()` unless advanced task capabilities are needed.

A single argument to the Activity is positional. Multiple arguments are not supported in the type-safe form of `start_activity()` or `execute_activity()` and must be supplied through the `args` keyword argument.

View the source code in the context of the rest of the application code.

```python
from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from your_activities_dacx import your_activity
    from your_dataobject_dacx import YourParams

# ...

@workflow.defn(name="YourWorkflow")
class YourWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            your_activity,
            YourParams("Hello", name),
            start_to_close_timeout=timedelta(seconds=10),
        )
```

### Set the required Activity Timeouts {#required-timeout}

**How to set the required Activity Timeouts using the Temporal Python SDK.**

Activity Execution semantics rely on several parameters. The only required value is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set in the Activity Options.

Activity options are set as keyword arguments after the Activity arguments. Available timeouts are:

- `schedule_to_close_timeout`
- `schedule_to_start_timeout`
- `start_to_close_timeout`

View the source code in the context of the rest of the application code.

```python
# ...

activity_timeout_result = await workflow.execute_activity(
    your_activity,
    YourParams(greeting, "Activity Timeout option"),
    # Activity Execution Timeout
    start_to_close_timeout=timedelta(seconds=10),
    # schedule_to_start_timeout=timedelta(seconds=10),
    # schedule_to_close_timeout=timedelta(seconds=10),
)
```

### Get the results of an Activity Execution {#get-activity-results}

**How to get the results of an Activity Execution using the Temporal Python SDK.**

The call to spawn an [Activity Execution](/activity-execution) generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command and provides the Workflow with an Awaitable. Workflow Executions can either block progress until the result is available through the Awaitable or continue progressing, making use of the result when it becomes available.

Use [`start_activity()`](https://python.temporal.io/temporalio.workflow.html#start_activity) to start an Activity and return its handle, an [`ActivityHandle`](https://python.temporal.io/temporalio.workflow.ActivityHandle.html). Use [`execute_activity()`](https://python.temporal.io/temporalio.workflow.html#execute_activity) to start an Activity and wait for its result; it takes the same arguments as `start_activity()` and `await`s the result. You must provide either `schedule_to_close_timeout` or `start_to_close_timeout`. Use `execute_activity()` in most cases unless advanced task capabilities are needed.

View the source code in the context of the rest of the application code.
```python
from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from your_activities_dacx import your_activity
    from your_dataobject_dacx import YourParams

# ...

@workflow.defn(name="YourWorkflow")
class YourWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            your_activity,
            YourParams("Hello", name),
            start_to_close_timeout=timedelta(seconds=10),
        )
```

## Run a Worker Process {#run-a-dev-worker}

**How to run a Worker Process using the Temporal Python SDK.**

The [Worker Process](/workers#worker-process) is where Workflow Functions and Activity Functions are executed.

- Each [Worker Entity](/workers#worker-entity) in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
- Each Worker Entity must also associate itself with exactly one [Task Queue](/task-queue).
- Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A [Worker Entity](/workers#worker-entity) is the component within a Worker Process that listens to a specific Task Queue. Although multiple Worker Entities can be in a single Worker Process, a single Worker Entity Worker Process may be perfectly sufficient. For more information, see the [Worker tuning guide](/develop/worker-performance).

A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

To develop a Worker, use the `Worker()` constructor and add your Client, Task Queue, Workflows, and Activities as arguments. The following code example creates a Worker that polls for Tasks from the Task Queue and executes the Workflow. When a Worker is created, it accepts a list of Workflows in the `workflows` parameter, a list of Activities in the `activities` parameter, or both.

View the source code in the context of the rest of the application code.

```python
import asyncio

from temporalio.client import Client
from temporalio.worker import Worker

# ...

async def main():
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="your-task-queue",
        workflows=[YourWorkflow],
        activities=[your_activity],
    )
    await worker.run()

if __name__ == "__main__":
    asyncio.run(main())
```

### Register types {#register-types}

**How to register types using the Temporal Python SDK.**

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

When a `Worker` is created, it accepts a list of Workflows in the `workflows` parameter, a list of Activities in the `activities` parameter, or both.

View the source code in the context of the rest of the application code.

```python
# ...
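# NOTE: this condensed snippet assumes `asyncio` is imported and that
# `YourWorkflow` and `your_activity` are imported from the modules shown
# earlier (an assumption; the full sample source includes these imports).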
async def main():
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="your-task-queue",
        workflows=[YourWorkflow],
        activities=[your_activity],
    )
    await worker.run()

if __name__ == "__main__":
    asyncio.run(main())
```

---

## Debugging - Python SDK

This page shows how to do the following:

- [Debug in a development environment](#debug-in-a-development-environment)
- [Debug in a production environment](#debug-in-a-production-environment)

### Debug in a development environment {#debug-in-a-development-environment}

**How to debug in a development environment using the Temporal Python SDK.**

When developing Workflows, you can use the standard development tools of logging and a debugger to see what's happening in your Workflow. You can also inspect what's happening in your Workflow by using the [Web UI](/web-ui) or the [Temporal CLI](/cli).

### Debug in a production environment {#debug-in-a-production-environment}

**How to debug in a production environment using the Temporal Python SDK.**

You can debug production Workflows using:

- [Web UI](/web-ui)
- [Temporal CLI](/cli)
- [Replay](/develop/python/testing-suite#replay)
- [Tracing](/develop/python/observability#tracing)
- [Logging](/develop/python/observability#logging)

You can debug and tune Worker performance with metrics and the [Worker performance guide](/develop/worker-performance). For more information, see [Observability ▶️ Metrics](/develop/python/observability#metrics) for setting up SDK metrics.

Debug Server performance with [Cloud metrics](/cloud/metrics/) or [self-hosted Server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics).

---

## Enriching the User Interface - Python SDK

Temporal supports adding context to Workflows and Events with metadata. This helps users identify and understand Workflows and their operations.

## Adding Summary and Details to Workflows

### Starting a Workflow

When starting a Workflow, you can provide a static summary and details to help identify the Workflow in the UI:

```python
# Start a Workflow with static summary and details
handle = await client.start_workflow(
    YourWorkflow.run,
    "workflow input",
    id="your-workflow-id",
    task_queue="your-task-queue",
    static_summary="Order processing for customer #12345",
    static_details="Processing premium order with expedited shipping",
)
```

`static_summary` is a single-line description that appears in the Workflow list view, limited to 200 bytes. `static_details` can be multi-line and provides more comprehensive information that appears in the Workflow details view, with a larger limit of 20 KB. The input format is standard Markdown, excluding images, HTML, and scripts.

You can also use the `execute_workflow` method with the same parameters:

```python
result = await client.execute_workflow(
    YourWorkflow.run,
    "workflow input",
    id="your-workflow-id",
    task_queue="your-task-queue",
    static_summary="Order processing for customer #12345",
    static_details="Processing premium order with expedited shipping",
)
```

### Inside the Workflow

Within a Workflow, you can get and set the _current Workflow details_. Unlike the static summary and details set at Workflow start, this value can be updated throughout the life of the Workflow. Current Workflow details also take Markdown format (excluding images, HTML, and scripts) and can span multiple lines.
```python
@workflow.defn
class YourWorkflow:
    @workflow.run
    async def run(self, input: str) -> str:
        # Get the current details
        current_details = workflow.get_current_details()
        print(f"Current details: {current_details}")

        # Set/update the current details
        workflow.set_current_details("Updated workflow details with new status")

        return "Workflow completed"
```

### Adding Summary to Activities and Timers

You can attach a metadata parameter `summary` to Activities when starting them from within a Workflow:

```python
@workflow.defn
class YourWorkflow:
    @workflow.run
    async def run(self, input: str) -> str:
        # Start an Activity with a summary
        result = await workflow.execute_activity(
            your_activity,
            input,
            start_to_close_timeout=timedelta(seconds=10),
            summary="Processing user data",
        )
        return result
```

Similarly, you can attach a `summary` to Timers within a Workflow:

```python
@workflow.defn
class YourWorkflow:
    @workflow.run
    async def run(self, input: str) -> str:
        # Create a timer with a summary
        await workflow.sleep(
            timedelta(minutes=5),
            summary="Waiting for payment confirmation",
        )
        return "Timer completed"
```

The input format for `summary` is a string, limited to 200 bytes.

## Viewing Summary and Details in the UI

Once you've added summaries and details to your Workflows, Activities, and Timers, you can view this enriched information in the Temporal Web UI. Navigate to your Workflow's details page to see the metadata displayed in two key locations:

### Workflow Overview Section

At the top of the Workflow details page, you'll find the Workflow-level metadata:

- **Summary & Details** - Displays the static summary and static details set when starting the Workflow
- **Current Details** - Displays the dynamic details that can be updated during Workflow Execution

All Workflow details support standard Markdown formatting (excluding images, HTML, and scripts), allowing you to create rich, structured information displays.

### Event History

Individual Events in the Workflow's Event History display their associated summaries when available. Workflow, Activity, and Timer summaries appear in purple text next to their corresponding Events, providing immediate context without requiring you to expand the Event details. When you do expand an Event, the summary is also prominently displayed in the detailed view.

---

## Error handling - Python SDK

Temporal automatically handles many types of failures through retries and Durable Execution. This page shows you how to build on these capabilities to create robust error handling for your applications.

**Key concepts:** Not all failures should be handled the same way. **Transient failures** (like brief network hiccups) resolve on their own and should be retried immediately. **Intermittent failures** (like rate limiting) need increasing delays between retries. **Permanent failures** (like invalid input) won't resolve through retries and need different data or code changes.

Temporal distinguishes between **Workflow Task failures** (bugs that can be fixed with redeployment) and **Workflow Execution failures** (business logic failures that should stop the Workflow). Task failures retry automatically so you can fix and redeploy without losing state. Execution failures require you to explicitly raise an `ApplicationError`.
This page shows how to:

- [Make Activities idempotent](#make-activities-idempotent)
- [Raise exceptions from Activities](#raise-exceptions-from-activities)
- [Raise exceptions from Workflows](#raise-exceptions-from-workflows)
- [Handle exceptions in Workflows](#handle-exceptions-in-workflows)
- [Configure custom Retry Policies](#configure-custom-retry-policies)
- [Mark specific errors as non-retryable](#mark-errors-as-non-retryable)
- [Specify non-retryable error types in Retry Policies](#specify-non-retryable-error-types)
- [Implement rollback logic with the Saga pattern](#implement-saga-pattern)
- [Understand Temporal's failure types](#understand-failure-types)

## Make Activities idempotent {#make-activities-idempotent}

**How to make Activities idempotent using the Temporal Python SDK**

Because Activities may be retried due to failures, it's strongly recommended to make them idempotent. An idempotent operation produces the same result whether executed once or multiple times.

Activities follow an at-least-once execution model. If a Worker executes an Activity successfully but crashes before notifying the Temporal Service, the Activity will be retried. Without idempotence, this could cause duplicate charges in payment processing or create duplicate resources in infrastructure provisioning.

### Use idempotency keys

Most external services support idempotency keys—unique identifiers that prevent duplicate operations. When the service receives a request with a key it has already processed, it returns the original result instead of performing the operation again.

Create an idempotency key by combining the Workflow Run ID and Activity ID:

```python
from temporalio import activity


@activity.defn
async def process_payment(amount: float, account: str):
    info = activity.info()
    idempotency_key = f"{info.workflow_run_id}-{info.activity_id}"

    # Pass idempotency_key to your payment service
    result = await payment_service.charge(
        amount=amount,
        account=account,
        idempotency_key=idempotency_key,
    )
    return result
```

This value remains constant across Activity retries but is unique among all Workflow Executions.

### Design Activities to be atomic

Activities are atomic—they either complete successfully or not. If an Activity performs multiple steps and the last step fails, the entire Activity is retried.

Consider this Activity:

1. Look up data in a database
2. Call a microservice with the data
3. Write the result to the filesystem

If step 3 fails, all three steps execute again on retry. You might split this into three separate Activities so only the failed step retries, but balance this against having a larger Event History with more Activity Executions.

## Raise exceptions from Activities {#raise-exceptions-from-activities}

**How to raise exceptions from Activities using the Temporal Python SDK**

Use `ApplicationError` to communicate application-specific failures from Activities. Temporal converts any Python exception raised in an Activity to an `ApplicationError`, but raising it explicitly gives you more control.
```python
from temporalio import activity
from temporalio.exceptions import ApplicationError


@activity.defn
async def validate_charge(credit_card_number: str, amount: float):
    if not is_valid_card(credit_card_number):
        raise ApplicationError(
            f"Invalid credit card number: {credit_card_number}",
            type="InvalidCreditCard",
        )
    if amount <= 0:
        raise ApplicationError(
            f"Amount must be positive, got {amount}",
            type="InvalidAmount",
        )
    return True
```

When raising an `ApplicationError`:

- Provide a descriptive `message`
- Optionally provide a `type` string to categorize the failure
- The error appears in the Event History as an `ActivityTaskFailed` Event

When an Activity fails, Temporal wraps the exception in an `ActivityError` before surfacing it to the Workflow. The `ActivityError` provides context, including:

- The Activity Type that failed
- The number of retry attempts
- The original cause (the `ApplicationError` you raised, or `TimeoutError`, `CancelledError`, etc.)

## Raise exceptions from Workflows {#raise-exceptions-from-workflows}

**How to raise exceptions from Workflows using the Temporal Python SDK**

The behavior depends on which exception you raise:

### Fail a Workflow Execution

To deliberately fail a Workflow Execution, raise an `ApplicationError`:

```python
from temporalio import workflow
from temporalio.exceptions import ApplicationError


@workflow.defn
class PizzaDeliveryWorkflow:
    @workflow.run
    async def run(self, order):
        distance = await workflow.execute_activity(
            calculate_distance,
            order.address,
            start_to_close_timeout=timedelta(seconds=10),
        )
        if order.is_delivery and distance.kilometers > 25:
            workflow.logger.error("Customer outside service area")
            raise ApplicationError(
                "Customer lives outside the service area",
                type="CustomerOutsideServiceArea",
            )
        # Continue with order...
```

This puts the Workflow Execution in the "Failed" state with no automatic retries. Use this for permanent failures where retrying won't help—like the customer being too far away.

### Trigger a Workflow Task retry

Raising any other Python exception (like `ValueError` or `TypeError`) causes a Workflow Task failure, which retries automatically:

```python
# This causes a Workflow Task failure (retries automatically)
raise ValueError("Unexpected condition")
```

This is intentional. Regular Python exceptions are treated as bugs that can be fixed with a code deployment, not as business logic failures. The Workflow Task retries indefinitely, letting you fix the bug and redeploy without losing Workflow state.
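A minimal sketch contrasting the two behaviors in one place (the Workflow and field names here are illustrative assumptions, not from the samples above):

```python
from temporalio import workflow
from temporalio.exceptions import ApplicationError


@workflow.defn
class QuantityWorkflow:
    @workflow.run
    async def run(self, quantity: int) -> str:
        if quantity <= 0:
            # Business-rule violation: permanently fail the Workflow Execution.
            raise ApplicationError(
                f"Quantity must be positive, got {quantity}",
                type="InvalidQuantity",
            )
        # By contrast, a plain Python exception raised here (for example, a
        # TypeError caused by a bug) would fail only the Workflow Task, which
        # retries until you deploy a fix.
        return f"Processed {quantity} items"
```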
## Handle exceptions in Workflows {#handle-exceptions-in-workflows}

**How to handle exceptions in Workflows using the Temporal Python SDK**

Use Python's `try/except` blocks to handle Activity failures in your Workflow:

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.exceptions import ActivityError, ApplicationError


@workflow.defn
class MoneyTransferWorkflow:
    @workflow.run
    async def run(self, details):
        # Withdraw money
        try:
            withdraw_result = await workflow.execute_activity(
                withdraw,
                details,
                start_to_close_timeout=timedelta(seconds=10),
            )
        except ActivityError as e:
            raise ApplicationError(
                f"Withdrawal failed: {e.cause}",
                type="WithdrawalError",
            )

        # Deposit money
        try:
            deposit_result = await workflow.execute_activity(
                deposit,
                details,
                start_to_close_timeout=timedelta(seconds=10),
            )
        except ActivityError as e:
            # Deposit failed - attempt refund
            try:
                await workflow.execute_activity(
                    refund,
                    withdraw_result,
                    start_to_close_timeout=timedelta(seconds=10),
                )
                raise ApplicationError(
                    "Deposit failed but money was refunded to the source account",
                    type="DepositError",
                )
            except ActivityError as refund_err:
                raise ApplicationError(
                    f"Deposit failed and refund also failed: {refund_err.cause}",
                    type="CriticalTransferError",
                )

        return f"Transfer complete: {withdraw_result}, {deposit_result}"
```

Common Temporal exceptions you can catch in Workflows:

- `ActivityError` - an Activity failed after exhausting retries
- `ChildWorkflowError` - a Child Workflow failed
- `CancelledError` - a Workflow, Activity, or Timer was canceled
- `TimeoutError` - an operation exceeded its timeout

If these exceptions propagate unhandled, the Workflow Execution fails (or enters the "Canceled" state for `CancelledError`).

## Configure custom Retry Policies {#configure-custom-retry-policies}

**How to configure custom Retry Policies using the Temporal Python SDK**

Activities have a default Retry Policy with unlimited attempts and exponential backoff. Customize this to match your expected failure patterns.

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.common import RetryPolicy


@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order):
        # Custom retry for a rate-limited service
        retry_policy = RetryPolicy(
            initial_interval=timedelta(seconds=10),
            backoff_coefficient=3.0,
            maximum_interval=timedelta(minutes=5),
            maximum_attempts=20,
        )

        result = await workflow.execute_activity(
            call_external_service,
            order,
            start_to_close_timeout=timedelta(seconds=30),
            retry_policy=retry_policy,
        )
        return result
```

Retry Policy attributes:

- **`initial_interval`**: Delay before the first retry (default: 1 second)
- **`backoff_coefficient`**: Multiplier for subsequent delays (default: 2.0)
- **`maximum_interval`**: Cap on the retry delay (default: 100× the initial interval)
- **`maximum_attempts`**: Maximum number of retry attempts (default: unlimited)
- **`non_retryable_error_types`**: Error types that shouldn't retry (default: empty)

### Match your Retry Policy to failure types

**For transient failures** (brief network issues): Use the defaults, or a low `initial_interval` and `backoff_coefficient`.

**For intermittent failures** (rate limiting): Increase `initial_interval` and `backoff_coefficient` to space out retries and let the condition resolve.

**For cost-sensitive APIs**: Set `maximum_attempts` to limit retries (rare—usually prefer timeouts).
### Use different policies for different Activities

You can use different Retry Policies for different Activities, or even multiple policies for the same Activity:

```python
fast_retry = RetryPolicy(
    initial_interval=timedelta(seconds=1),
    backoff_coefficient=1.5,
)

slow_retry = RetryPolicy(
    initial_interval=timedelta(seconds=30),
    backoff_coefficient=3.0,
)

# Same Activity, different policies
await workflow.execute_activity(
    process_order,
    order,
    start_to_close_timeout=timedelta(seconds=10),
    retry_policy=fast_retry,
)

# Later, with different circumstances...
await workflow.execute_activity(
    process_order,
    order,
    start_to_close_timeout=timedelta(seconds=10),
    retry_policy=slow_retry,
)
```

### Don't use Workflow Retry Policies

Unlike Activities, Workflows don't retry by default, and you usually shouldn't add a Retry Policy. Workflows are deterministic and not designed for failure-prone operations. A Workflow failure typically indicates a code bug or bad input data—retrying the entire Workflow repeats the same logic without fixing the underlying issue.

If you need retry logic for specific Workflow operations, implement it in your Workflow code rather than using a Workflow Retry Policy.

## Mark specific errors as non-retryable {#mark-errors-as-non-retryable}

**How to mark specific errors as non-retryable using the Temporal Python SDK**

Some failures are permanent and won't resolve through retries. Mark these as non-retryable to fail fast instead of waiting for timeouts. Set the `non_retryable` flag when raising an `ApplicationError`:

```python
from temporalio import activity
from temporalio.exceptions import ApplicationError


@activity.defn
async def process_payment(card_number: str, amount: float):
    if not is_valid_card_format(card_number):
        # An invalid format will never become valid through retries
        raise ApplicationError(
            f"Invalid credit card format: {card_number}",
            type="InvalidCardFormat",
            non_retryable=True,
        )
    if amount <= 0:
        # An invalid amount won't be fixed by retrying
        raise ApplicationError(
            f"Amount must be positive: {amount}",
            type="InvalidAmount",
            non_retryable=True,
        )
    # Process payment...
```

An `ApplicationError` with `non_retryable=True` will never retry, regardless of the Retry Policy. Use non-retryable errors for:

- Invalid input data that prevents the Activity from proceeding
- Business rule violations
- Authorization failures

**Use this sparingly.** In most cases, it's better to let the Retry Policy decide when to stop retrying based on time or attempts.

## Specify non-retryable error types {#specify-non-retryable-error-types}

**How to specify non-retryable error types in Retry Policies using the Temporal Python SDK**

Sometimes you want the Workflow (caller) to decide which error types shouldn't retry, rather than the Activity (implementer). List error types that shouldn't retry in your Retry Policy:

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.common import RetryPolicy
from temporalio.exceptions import ActivityError


@workflow.defn
class CheckoutWorkflow:
    @workflow.run
    async def run(self, payment_details):
        retry_policy = RetryPolicy(
            non_retryable_error_types=[
                "InvalidCardFormat",
                "InsufficientFunds",
                "AccountClosed",
            ]
        )
        try:
            result = await workflow.execute_activity(
                process_payment,
                payment_details,
                start_to_close_timeout=timedelta(seconds=30),
                retry_policy=retry_policy,
            )
            return result
        except ActivityError as e:
            workflow.logger.error(f"Payment failed: {e.cause}")
            # Handle the non-retryable error...
```

When an Activity raises an `ApplicationError`, Temporal checks whether its `type` is in `non_retryable_error_types`. If it matches, the Activity fails immediately without retries.

### When to use each approach

**`non_retryable=True` in the Activity**: Use when the Activity implementer knows the error is permanently unrecoverable. This enforces the constraint for all callers.

**`non_retryable_error_types` in the Retry Policy**: Use when the caller wants to decide which errors are unrecoverable based on its business logic. This lets different Workflows make different decisions about the same Activity.

## Implement rollback logic with the Saga pattern {#implement-saga-pattern}

**How to implement the Saga pattern using the Temporal Python SDK**

The Saga pattern coordinates a sequence of operations where each operation has a compensating action to undo its effects. If any operation fails, execute the compensating actions in reverse order to roll back the previous operations.

Use this for multi-step processes like:

- E-commerce checkout (payment, inventory, shipping)
- Distributed transactions across services
- Multi-stage data updates

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.exceptions import ActivityError, ApplicationError


@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order):
        compensations = []
        try:
            # Reserve inventory
            compensations.append({"activity": revert_inventory, "input": order})
            await workflow.execute_activity(
                reserve_inventory,
                order,
                start_to_close_timeout=timedelta(seconds=10),
            )

            # Charge payment
            compensations.append({"activity": refund_payment, "input": order})
            payment_id = await workflow.execute_activity(
                charge_payment,
                order,
                start_to_close_timeout=timedelta(seconds=10),
            )

            # Create shipment
            compensations.append({"activity": cancel_shipment, "input": payment_id})
            shipment_id = await workflow.execute_activity(
                create_shipment,
                order,
                start_to_close_timeout=timedelta(seconds=10),
            )

            return {"payment_id": payment_id, "shipment_id": shipment_id}
        except ActivityError as e:
            workflow.logger.error(f"Order failed: {e.cause}, rolling back...")

            # Execute compensations in reverse order
            for compensation in reversed(compensations):
                try:
                    await workflow.execute_activity(
                        compensation["activity"],
                        compensation["input"],
                        start_to_close_timeout=timedelta(seconds=10),
                    )
                except ActivityError as comp_err:
                    # Log the compensation failure but continue with the others
                    workflow.logger.error(f"Compensation failed: {comp_err.cause}")

            # Re-raise the original error
            raise ApplicationError(
                f"Order failed: {e.cause}",
                type="OrderFailed",
            )
```

Key points:

- Add compensating actions to a list **before** executing each Activity
- Use `reversed(compensations)` to undo operations in the correct order
- Handle compensation failures gracefully (they might fail too)
- Temporal manages all state and retry logic, making Saga implementation straightforward

## Understand Temporal's failure types {#understand-failure-types}

Temporal uses specialized exception types to represent different failure scenarios. All of these exceptions inherit from [`TemporalError`](https://python.temporal.io/temporalio.exceptions.TemporalError.html).

**Do not extend `TemporalError` or its children.** Use the provided exception types to ensure:

- Consistent behavior across process and language boundaries
- Compatibility with the Temporal Service
- Proper serialization via Protocol Buffers

### Common failure types

**`ApplicationError`**: Raised by your code to indicate application-specific failures.
This is the only Temporal exception you should raise manually. When you raise an `ApplicationError`, you can optionally provide a `type` string and mark it as `non_retryable`.

**`ActivityError`**: Wraps exceptions raised from Activities. The `cause` field contains the original error (`ApplicationError`, `TimeoutError`, `CancelledError`, etc.). Catch this in Workflows to handle Activity failures.

**`TimeoutError`**: Occurs when an Activity or Workflow exceeds its configured timeout.

**`CancelledError`**: Results from the cancellation of a Workflow, Activity, or Timer. You can catch and ignore this to continue execution despite cancellation.

**`TerminatedError`**: Occurs when a Workflow Execution is forcefully terminated.

**`ChildWorkflowError`**: Raised when a Child Workflow Execution fails.

**`WorkflowAlreadyStartedError`**: Raised when attempting to start a Workflow with an ID that's already running.

**`ServerError`**: Used for exceptions from the Temporal Service itself (like database failures).

### Workflow Task vs. Workflow Execution failures

**Workflow Task failures** occur when Workflow code raises a non-Temporal exception (like `ValueError`, `TypeError`, or non-determinism errors). These retry automatically, letting you fix bugs and redeploy without losing Workflow state.

**Workflow Execution failures** occur when Workflow code raises a Temporal exception like `ApplicationError`. These put the Workflow in the "Failed" state with no automatic retries.

Example of a permanent failure that should fail the Workflow:

```python
if distance.kilometers > MAX_DELIVERY_DISTANCE:
    # Retrying won't change the distance - this is permanent
    raise ApplicationError(
        "Customer lives outside service area",
        type="OutsideServiceArea",
    )
```

### Protecting sensitive information

The default Failure Converter copies exception messages and stack traces as plain text, visible in the Web UI. If your exceptions might contain sensitive information, configure a custom Failure Converter to encrypt this data. See the [Securing Application Data course](https://learn.temporal.io/courses/appdatasec/) for details.

---

## Failure detection - Python SDK

This page shows how to do the following:

- [Raise and Handle Exceptions](#exception-handling)
- [Deliberately Fail Workflows](#workflow-failure)
- [Set Workflow Timeouts](#workflow-timeouts)
- [Set Workflow Retries](#workflow-retries)
- [Set Activity Timeouts](#activity-timeouts)
- [Set an Activity Retry Policy](#activity-retries)
- [Heartbeat an Activity](#activity-heartbeats)

## Raise and Handle Exceptions {#exception-handling}

In each Temporal SDK, error handling is implemented idiomatically, following the conventions of the language. Temporal uses several different error classes internally—for example, [`CancelledError`](https://python.temporal.io/temporalio.exceptions.CancelledError.html) in the Python SDK, which handles Workflow cancellation. You should not raise or otherwise implement these manually, as they are tied to Temporal platform logic.

The one Temporal error class that you will typically raise deliberately is [`ApplicationError`](https://python.temporal.io/temporalio.exceptions.ApplicationError.html). In fact, *any* other exception raised from your Python code in a Temporal Activity is converted to an `ApplicationError` internally. This way, an error's type, severity, and any additional details can be sent to the Temporal Service, indexed by the Web UI, and even serialized across language boundaries.
In other words, these two code samples do the same thing:

```python
from temporalio import activity


class MyCustomError(Exception):
    def __init__(self, message, error_code):
        super().__init__(message)
        self.message = message
        self.error_code = error_code

    def __str__(self):
        return f"{self.message} (Error Code: {self.error_code})"


@activity.defn
async def my_activity(input: MyActivityInput):
    try:
        ...  # Your activity logic goes here
    except Exception as e:
        attempt = activity.info().attempt
        raise MyCustomError(
            f"Error encountered on attempt {attempt}",
            "MY_ERROR_CODE",  # placeholder error code for illustration
        ) from e
```

```python
from temporalio import activity
from temporalio.exceptions import ApplicationError


@activity.defn
async def my_activity(input: MyActivityInput):
    try:
        ...  # Your activity logic goes here
    except Exception as e:
        attempt = activity.info().attempt
        raise ApplicationError(
            f"Error encountered on attempt {attempt}",
            type="MyCustomError",
        ) from e
```

Depending on your implementation, you may decide to use either method. One reason to use the Temporal `ApplicationError` class is that it allows you to set an additional `non_retryable` parameter, so you can decide whether an error should not be retried automatically by Temporal. This can be useful for deliberately failing a Workflow due to bad input data, rather than waiting for a timeout to elapse:

```python
from temporalio import activity
from temporalio.exceptions import ApplicationError


@activity.defn
async def my_activity(input: MyActivityInput):
    try:
        ...  # Your activity logic goes here
    except Exception as e:
        attempt = activity.info().attempt
        raise ApplicationError(
            f"Error encountered on attempt {attempt}",
            type="MyNonRetryableError",
            non_retryable=True,
        ) from e
```

You can alternately specify a list of errors that are non-retryable in your Activity [Retry Policy](#activity-retries).

## Failing Workflows {#workflow-failure}

One of the core design principles of Temporal is that an Activity Failure will never directly cause a Workflow Failure—a Workflow should never return as Failed unless it fails deliberately. The default Retry Policy associated with Temporal Activities is to retry them until reaching a certain timeout threshold. Activities will not actually *return* a failure to your Workflow until this condition or another non-retryable condition is met.

At this point, you can decide how to handle an error returned by your Activity the way you would in any other program. For example, you could implement a [Saga Pattern](https://learn.temporal.io/tutorials/python/trip-booking-app/) that uses `try` and `except` blocks to "unwind" some of the steps your Workflow has performed up to the point of the Activity Failure.

**You will only fail a Workflow by manually raising an `ApplicationError` from the Workflow code.** You could do this in response to an Activity Failure, if the failure of that Activity means that your Workflow should not continue:

```python
try:
    # process_credit_card and card_details are illustrative placeholders
    credit_card_confirmation = await workflow.execute_activity(
        process_credit_card,
        card_details,
        start_to_close_timeout=timedelta(seconds=10),
    )
except ActivityError as e:
    workflow.logger.error(f"Unable to process credit card: {e.cause}")
    raise ApplicationError(
        "Unable to process credit card",
        type="CreditCardProcessingError",
    )
```

This works differently in a Workflow than raising exceptions from Activities. In an Activity, any Python exception or custom exception is converted to a Temporal `ApplicationError`. In a Workflow, any exception other than an explicit Temporal `ApplicationError` only fails that particular [Workflow Task](https://docs.temporal.io/tasks#workflow-task-execution), which is then retried. This includes typical Python runtime errors like a `NameError` or a `TypeError` that are raised automatically.
These errors are treated as bugs that can be corrected with a fixed deployment, rather than as a reason for a Temporal Workflow Execution to return unexpectedly.

## Workflow timeouts {#workflow-timeouts}

**How to set Workflow timeouts using the Temporal Python SDK**

Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution.

Before we continue, note that we generally do not recommend setting Workflow timeouts, because Workflows are designed to be long-running and resilient; a timeout can limit a Workflow's ability to handle unexpected delays or long-running processes. If you need to perform an action inside your Workflow after a specific period of time, we recommend using a Timer.

Workflow timeouts are set when [starting the Workflow Execution](#workflow-timeouts).

- **[Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout):** restricts the maximum amount of time that a single Workflow Execution can be executed.
- **[Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout):** restricts the maximum amount of time that a single Workflow Run can last.
- **[Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout):** restricts the maximum amount of time that a Worker can execute a Workflow Task.

Pass the timeout as a keyword argument to either the [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`execute_workflow()`](https://python.temporal.io/temporalio.client.Client.html#execute_workflow) asynchronous method. Available timeouts are:

- `execution_timeout`
- `run_timeout`
- `task_timeout`

View the source code in the context of the rest of the application code.

```python
# ...

result = await client.execute_workflow(
    YourWorkflow.run,
    "your timeout argument",
    id="your-workflow-id",
    task_queue="your-task-queue",
    # Set the Workflow Timeout duration
    execution_timeout=timedelta(seconds=2),
    # run_timeout=timedelta(seconds=2),
    # task_timeout=timedelta(seconds=2),
)
```

### Workflow retries {#workflow-retries}

**How to set a Workflow Retry Policy using the Temporal Python SDK**

A Retry Policy works in cooperation with the timeouts to provide fine-grained control over the execution experience. Use a [Retry Policy](/encyclopedia/retry-policies) to retry a Workflow Execution in the event of a failure. Workflow Executions do not retry by default, and Retry Policies should be used with Workflow Executions only in certain situations.

Pass the Retry Policy to either the [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`execute_workflow()`](https://python.temporal.io/temporalio.client.Client.html#execute_workflow) asynchronous method.

View the source code in the context of the rest of the application code.

```python
# ...

handle = await client.execute_workflow(
    YourWorkflow.run,
    "your retry policy argument",
    id="your-workflow-id",
    task_queue="your-task-queue",
    retry_policy=RetryPolicy(maximum_interval=timedelta(seconds=2)),
)
```

## Set Activity timeouts {#activity-timeouts}

**How to set an Activity Execution Timeout using the Temporal Python SDK**

Each Activity timeout controls the maximum duration of a different aspect of an Activity Execution. The following timeouts are available in the Activity Options.
- **[Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout):** the maximum amount of time allowed for the overall [Activity Execution](/activity-execution).
- **[Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout):** the maximum amount of time allowed for a single [Activity Task Execution](/tasks#activity-task-execution).
- **[Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout):** the maximum amount of time allowed from when an [Activity Task](/tasks#activity-task) is scheduled to when a [Worker](/workers#worker) starts that Activity Task.

An Activity Execution must have either the Start-To-Close or the Schedule-To-Close Timeout set. Activity options are set as keyword arguments after the Activity arguments. Available timeouts are:

- `schedule_to_close_timeout`
- `schedule_to_start_timeout`
- `start_to_close_timeout`

View the source code in the context of the rest of the application code.

```python
# ...

activity_timeout_result = await workflow.execute_activity(
    your_activity,
    YourParams(greeting, "Activity Timeout option"),
    # Activity Execution Timeout
    start_to_close_timeout=timedelta(seconds=10),
    # schedule_to_start_timeout=timedelta(seconds=10),
    # schedule_to_close_timeout=timedelta(seconds=10),
)
```

### Set an Activity Retry Policy {#activity-retries}

**How to set an Activity Retry Policy using the Temporal Python SDK**

A Retry Policy works in cooperation with the timeouts to provide fine-grained control over the execution experience. Activity Executions are automatically associated with a default [Retry Policy](/encyclopedia/retry-policies) if a custom one is not provided.

To create an Activity Retry Policy in Python, pass an instance of the [`RetryPolicy`](https://python.temporal.io/temporalio.common.RetryPolicy.html) class to the [`start_activity()`](https://python.temporal.io/temporalio.workflow.html#start_activity) or [`execute_activity()`](https://python.temporal.io/temporalio.workflow.html#execute_activity) function.

View the source code in the context of the rest of the application code.

```python
from temporalio.common import RetryPolicy

# ...

activity_result = await workflow.execute_activity(
    your_activity,
    YourParams(greeting, "Retry Policy options"),
    start_to_close_timeout=timedelta(seconds=10),
    # Retry Policy
    retry_policy=RetryPolicy(
        backoff_coefficient=2.0,
        maximum_attempts=5,
        initial_interval=timedelta(seconds=1),
        maximum_interval=timedelta(seconds=2),
        # non_retryable_error_types=["ValueError"],
    ),
)
```

### Override the retry interval with `next_retry_delay` {#next-retry-delay}

To override the next retry interval set by the current policy, pass `next_retry_delay` when raising an [`ApplicationError`](/references/failures#application-failure) in an Activity. This value replaces whatever the retry interval would normally be on the Retry Policy.

For example, you can set the delay interval based on an Activity's attempt count. In the following example, the retry delay starts at 3 seconds after the first attempt. It increases to 6 seconds for the second attempt, 9 seconds for the third attempt, and so forth.
This creates a steadily increasing, linear backoff, versus the exponential approach used by [backoff coefficients](/encyclopedia/retry-policies#backoff-coefficient):

```python
from datetime import timedelta

from temporalio import activity
from temporalio.exceptions import ApplicationError


@activity.defn
async def my_activity(input: MyActivityInput):
    try:
        ...  # Your activity logic goes here
    except Exception as e:
        attempt = activity.info().attempt
        raise ApplicationError(
            f"Error encountered on attempt {attempt}",
            next_retry_delay=timedelta(seconds=3 * attempt),
        ) from e
```

## Heartbeat an Activity {#activity-heartbeats}

**How to Heartbeat an Activity using the Temporal Python SDK**

An [Activity Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) is a ping from the [Worker Process](/workers#worker-process) that is executing the Activity to the [Temporal Service](/temporal-service). Each Heartbeat informs the Temporal Service that the [Activity Execution](/activity-execution) is making progress and the Worker has not crashed. If the Temporal Service does not receive a Heartbeat within the [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) period, the Activity is considered failed, and another [Activity Task Execution](/tasks#activity-task-execution) may be scheduled according to the Retry Policy.

Heartbeats may not always be sent to the Temporal Service—they may be [throttled](/encyclopedia/detecting-activity-failures#throttling) by the Worker.

Activity Cancellations are delivered to Activities from the Temporal Service when they Heartbeat. Activities that don't Heartbeat can't receive a Cancellation. Heartbeat throttling may lead to a Cancellation being delivered later than expected.

Heartbeats can contain a `details` field describing the Activity's current progress. If an Activity is retried, it can access the `details` from the last Heartbeat that was sent to the Temporal Service.

To Heartbeat an Activity Execution in Python, use the [`heartbeat()`](https://python.temporal.io/temporalio.activity.html#heartbeat) API.

```python
from temporalio import activity


@activity.defn
async def your_activity_definition() -> str:
    activity.heartbeat("heartbeat details!")
    return "done"
```

In addition to obtaining cancellation information, Heartbeats also support detail data that persists on the server for retrieval during an Activity retry. If an Activity calls `heartbeat(123, 456)` and then fails and is retried, `heartbeat_details` returns an iterable containing `123` and `456` on the next run.

#### Set a Heartbeat Timeout {#heartbeat-timeout}

**How to set a Heartbeat Timeout using the Temporal Python SDK**

A [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) works in conjunction with [Activity Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat).

[`heartbeat_timeout`](https://python.temporal.io/temporalio.worker.StartActivityInput.html#heartbeat_timeout) is an option on the [`start_activity()`](https://python.temporal.io/temporalio.workflow.html#start_activity) function that sets the maximum time between Activity Heartbeats.

```python
workflow.start_activity(
    activity="your-activity",
    schedule_to_close_timeout=timedelta(seconds=5),
    heartbeat_timeout=timedelta(seconds=1),
)
```

`execute_activity()` is a shortcut for [`start_activity()`](https://python.temporal.io/temporalio.workflow.html#start_activity) that waits on its result. To get just the handle so you can wait and cancel separately, use `start_activity()`.
Use `execute_activity()` in most cases unless advanced task capabilities are needed.

```python
workflow.execute_activity(
    "your-activity",
    name,
    schedule_to_close_timeout=timedelta(seconds=5),
    heartbeat_timeout=timedelta(seconds=1),
)
```

---

## Python SDK developer guide

![Python SDK Banner](/img/assets/banner-python-temporal.png)

:::info PYTHON SPECIFIC RESOURCES

Build Temporal Applications with the Python SDK.

**Temporal Python Technical Resources:**

- [Python SDK Quickstart - Setup Guide](https://docs.temporal.io/develop/python/set-up-your-local-python)
- [Python API Documentation](https://python.temporal.io)
- [Python SDK Code Samples](https://github.com/temporalio/samples-python)
- [Python SDK GitHub](https://github.com/temporalio/sdk-python)
- [Temporal 101 in Python Free Course](https://learn.temporal.io/courses/temporal_101/python/)

**Get Connected with the Temporal Python Community:**

- [Temporal Python Community Slack](https://app.slack.com/client/TNWA8QCGZ)
- [Python SDK Forum](https://community.temporal.io/tag/python-sdk)

:::

## [Core Application](/develop/python/core-application)

Use the essential components of a Temporal Application (Workflows, Activities, and Workers) to build and run a Temporal application.

- [Develop a Basic Workflow](/develop/python/core-application#develop-workflows)
- [Develop a Basic Activity](/develop/python/core-application#develop-activities)
- [Start an Activity Execution](/develop/python/core-application#activity-execution)
- [Run Worker Processes](/develop/python/core-application#run-a-dev-worker)

## [Temporal Client](/develop/python/temporal-client)

Connect to a Temporal Service and start a Workflow Execution.

- [Connect to Development Temporal Service](/develop/python/temporal-client#connect-to-development-service)
- [Connect to Temporal Cloud](/develop/python/temporal-client#connect-to-temporal-cloud)
- [Start a Workflow Execution](/develop/python/temporal-client#start-workflow-execution)

## [Python SDK Sandbox](/develop/python/python-sdk-sandbox)

Use third-party Python modules without non-deterministic behavior.

## [Python SDK sync vs. async implementations](/develop/python/python-sdk-sync-vs-async)

Implement synchronous or asynchronous Activities.

## [Testing](/develop/python/testing-suite)

Set up the testing suite and test Workflows and Activities.

- [Test Frameworks](/develop/python/testing-suite#test-frameworks)
- [Testing Activities](/develop/python/testing-suite#test-activities)
- [Testing Workflows](/develop/python/testing-suite#test-workflows)
- [How to Replay a Workflow Execution](/develop/python/testing-suite#replay)

## [Failure detection](/develop/python/failure-detection)

Explore how your application can detect failures using timeouts and automatically attempt to mitigate them with retries.

- [Workflow Timeouts](/develop/python/failure-detection#workflow-timeouts)
- [Set Activity Timeouts](/develop/python/failure-detection#activity-timeouts)
- [Heartbeat an Activity](/develop/python/failure-detection#activity-heartbeats)

## [Workflow message passing](/develop/python/message-passing)

Send messages to and read the state of Workflow Executions.
- [Develop with Signals](/develop/python/message-passing#signals)
- [Develop with Queries](/develop/python/message-passing#queries)
- [Develop with Updates](/develop/python/message-passing#updates)
- [What is a Dynamic Handler](/develop/python/message-passing#dynamic-handler)

## [Interrupt a Workflow feature guide](/develop/python/cancellation)

Interrupt a Workflow Execution with a Cancel or Terminate action.

- [Cancel a Workflow](/develop/python/cancellation#cancellation)
- [Terminate a Workflow](/develop/python/cancellation#termination)
- [Reset a Workflow](/develop/python/cancellation#reset): Resume a Workflow Execution from an earlier point in its Event History.
- [Cancel an Activity from a Workflow](/develop/python/cancellation#cancel-activity)

## [Asynchronous Activity completion](/develop/python/asynchronous-activity-completion)

Complete Activities asynchronously.

- [Asynchronously Complete an Activity](/develop/python/asynchronous-activity-completion)

## [Versioning](/develop/python/versioning)

Change Workflow Definitions without causing non-deterministic behavior in running Workflows.

- [Introduction to Versioning](/develop/python/versioning)
- [How to Use the Patching API](/develop/python/versioning#patching)

## [Observability](/develop/python/observability)

Configure and use the Temporal Observability APIs.

- [Emit Metrics](/develop/python/observability#metrics)
- [Set up tracing](/develop/python/observability#tracing)
- [Log from a Workflow](/develop/python/observability#logging)
- [Use Visibility APIs](/develop/python/observability#visibility)

## [Debugging](/develop/python/debugging)

Explore various ways to debug your application.

- [Debugging](/develop/python/debugging)

## [Schedules](/develop/python/schedules)

Run Workflows on a schedule and delay the start of a Workflow.

- [Schedule a Workflow](/develop/python/schedules#schedule-a-workflow)
- [Temporal Cron Jobs](/develop/python/schedules#temporal-cron-jobs)
- [Start Delay](/develop/python/schedules#start-delay)

## [Data encryption](/develop/python/converters-and-encryption)

Use compression, encryption, and other data handling by implementing custom converters and codecs.

- [Custom Payload Codec](/develop/python/converters-and-encryption#custom-payload-codec)
- [Payload Conversion](/develop/python/converters-and-encryption#payload-conversion)

## Temporal Nexus

The [Temporal Nexus](/develop/python/nexus) feature guide shows how to use Temporal Nexus to connect durable executions within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations.

- [Create a Nexus Endpoint to route requests from caller to handler](/develop/python/nexus#create-nexus-endpoint)
- [Define the Nexus Service contract](/develop/python/nexus#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](/develop/python/nexus#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](/develop/python/nexus#develop-caller-workflow-nexus-service)
- [Make Nexus calls across Namespaces with a dev Server](/develop/python/nexus#register-the-caller-workflow-in-a-worker-and-start-the-caller-workflow)
- [Make Nexus calls across Namespaces in Temporal Cloud](/develop/python/nexus#nexus-calls-across-namespaces-temporal-cloud)

## [Durable Timers](/develop/python/timers)

Use Timers to make a Workflow Execution pause or "sleep" for seconds, minutes, days, months, or years.
- [Sleep](/develop/python/timers)

## [Child Workflows](/develop/python/child-workflows)

Explore how to spawn a Child Workflow Execution and handle Child Workflow Events.

- [Start a Child Workflow Execution](/develop/python/child-workflows)

## [Continue-As-New](/develop/python/continue-as-new)

Continue the Workflow Execution with a new Workflow Execution using the same Workflow ID.

- [Continue-As-New](/develop/python/continue-as-new)

## [Interceptors](/develop/python/interceptors)

Manage inbound and outbound SDK calls, enhance tracing, and add authorization to your Workflows and Activities.

- [Interceptors](/develop/python/interceptors)

## [Enriching the User Interface](/develop/python/enriching-ui)

Add descriptive information to Workflows and Events for better visibility and context in the UI.

- [Adding Summary and Details to Workflows](/develop/python/enriching-ui#adding-summary-and-details-to-workflows)

---

## Braintrust Integration

Temporal's integration with [Braintrust](https://braintrust.dev) gives you full observability into your AI agent Workflows—tracing every LLM call, managing prompts without code deploys, and tracking costs across models.

When building AI agents with Temporal, you get durable execution: automatic retries, state persistence, and the ability to recover from failures mid-workflow. Braintrust adds the observability layer: see exactly what your agents are doing, iterate on prompts in a UI, and measure whether changes improve outcomes.

The integration connects these capabilities with minimal code changes. Every Workflow and Activity becomes a span in Braintrust, and every LLM call is traced with inputs, outputs, tokens, and latency.

:::info

The Temporal Python SDK integration with Braintrust is currently in [Public Preview](/evaluate/development-production-features/release-stages#public-preview). Refer to the [Temporal product release stages guide](/evaluate/development-production-features/release-stages) for more information.

:::

All code snippets in this guide are taken from the [deep research sample](https://github.com/braintrustdata/braintrust-cookbook/blob/main/examples/TemporalDeepResearch/TemporalDeepResearch.mdx). Refer to the sample for the complete code and run it locally.

## Prerequisites

- This guide assumes you are already familiar with Braintrust. If you aren't, refer to the [Braintrust documentation](https://www.braintrust.dev/docs) for more details.
- If you are new to Temporal, we recommend reading [Understanding Temporal](/evaluate/understanding-temporal) or taking the [Temporal 101](https://learn.temporal.io/courses/temporal_101/) course.
- Ensure you have set up your local development environment by following the [Set up your local development environment](/develop/python/core-application) guide. When you're done, leave the Temporal Development Server running if you want to test your code locally.

## Configure Workers to use Braintrust

Workers execute the code that defines your Workflows and Activities. To trace Workflow and Activity execution in Braintrust, add the `BraintrustPlugin` to your Worker. Follow the steps below to configure your Worker.

1. Install the Braintrust SDK with Temporal support.

```bash
uv pip install "braintrust[temporal]"
```

2. Initialize the Braintrust logger before creating your Worker. The logger must be initialized first so that spans are properly connected.
   ```python
   import os

   from braintrust import init_logger

   # Initialize BEFORE creating the Temporal client or worker
   init_logger(project=os.environ.get("BRAINTRUST_PROJECT", "my-project"))
   ```

3. Add the `BraintrustPlugin` to your Worker.

   ```python
   from braintrust.contrib.temporal import BraintrustPlugin
   from temporalio.worker import Worker

   worker = Worker(
       client,
       task_queue="my-task-queue",
       workflows=[MyWorkflow],
       activities=[my_activity],
       plugins=[BraintrustPlugin()],  # Add this line
   )
   ```

4. Add the plugin to your Temporal Client as well. This enables span context propagation, linking client code to the Workflows it starts.

   ```python
   from braintrust.contrib.temporal import BraintrustPlugin
   from temporalio.client import Client

   client = await Client.connect(
       "localhost:7233",
       plugins=[BraintrustPlugin()],
   )
   ```

5. Run the Worker. Ensure the Worker process has access to your Braintrust API key via the `BRAINTRUST_API_KEY` environment variable.

   ```bash
   export BRAINTRUST_API_KEY="your-api-key"
   python worker.py
   ```

:::tip

You only need to provide API credentials to the Worker process. The client application that starts Workflow Executions doesn't need the Braintrust API key.

:::

## Trace LLM calls with wrap_openai

The simplest way to trace LLM calls is to wrap your OpenAI client. Every call through the wrapped client automatically creates a span in Braintrust with inputs, outputs, token counts, and latency.

```python
from braintrust import wrap_openai
from openai import AsyncOpenAI

# max_retries=0 because Temporal handles retries
client = wrap_openai(AsyncOpenAI(max_retries=0))
```

Use this client in your Activities:

```python
from braintrust import wrap_openai
from openai import AsyncOpenAI
from temporalio import activity


@activity.defn
async def invoke_model(prompt: str) -> str:
    client = wrap_openai(AsyncOpenAI(max_retries=0))
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```

After running a Workflow, you'll see a trace hierarchy in Braintrust:

```
my-workflow-request (client span)
└── temporal.workflow.MyWorkflow
    └── temporal.activity.invoke_model
        └── Chat Completion (gpt-4o)
```

## Add custom spans for application context

Add your own spans to capture business-level context like user queries, workflow inputs, and final outputs.

```python
import uuid

from braintrust import start_span


async def run_research(query: str):
    # `client` is the Temporal Client created earlier.
    with start_span(name="research-request", type="task") as span:
        span.log(input={"query": query})
        result = await client.execute_workflow(
            ResearchWorkflow.run,
            query,
            id=f"research-{uuid.uuid4()}",
            task_queue="research-task-queue",
        )
        span.log(output={"result": result})
        return result
```

## Manage prompts with load_prompt

Braintrust lets you manage prompts in a UI and deploy changes without code deploys. The workflow is:

1. **Develop** prompts in code, see results in Braintrust traces
2. **Create** a prompt in the Braintrust UI from your best version
3. **Evaluate** different versions using Braintrust's eval tools
4. **Deploy** by pointing your code at the Braintrust prompt
5. **Iterate** in the UI—changes go live without code deploys

To load a prompt from Braintrust in your Activity:

```python
import os

import braintrust
from braintrust import wrap_openai
from openai import AsyncOpenAI
from temporalio import activity


@activity.defn
async def invoke_model(prompt_slug: str, user_input: str) -> str:
    # Load prompt from Braintrust
    prompt = braintrust.load_prompt(
        project=os.environ.get("BRAINTRUST_PROJECT", "my-project"),
        slug=prompt_slug,
    )
    # Build returns the full prompt configuration
    built = prompt.build()
    # Extract system message
    system_content = None
    for msg in built.get("messages", []):
        if msg.get("role") == "system":
            system_content = msg["content"]
            break
    client = wrap_openai(AsyncOpenAI(max_retries=0))
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_content},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content
```

:::tip

Provide a fallback prompt in your code for resilience. If Braintrust is unavailable, your Workflow continues with the hardcoded prompt.

```python
DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."

try:
    prompt = braintrust.load_prompt(project="my-project", slug="my-prompt")
    # extract_system_message is your own helper, like the loop shown above.
    system_content = extract_system_message(prompt.build())
except Exception as e:
    activity.logger.warning(f"Failed to load prompt: {e}. Using fallback.")
    system_content = DEFAULT_SYSTEM_PROMPT
```

:::

## Example: Deep Research Agent

The [deep research sample](https://github.com/braintrustdata/braintrust-cookbook/blob/main/examples/TemporalDeepResearch/TemporalDeepResearch.mdx) demonstrates a complete AI agent that:

- Plans research strategies
- Generates search queries
- Executes web searches in parallel
- Synthesizes findings into comprehensive reports

The sample shows all integration patterns: wrapped OpenAI client, BraintrustPlugin on Worker and Client, custom spans, and prompt management with `load_prompt()`.

To run the sample:

```bash
# Terminal 1: Start Temporal
temporal server start-dev

# Terminal 2: Start the worker
export BRAINTRUST_API_KEY="your-api-key"
export OPENAI_API_KEY="your-api-key"
export BRAINTRUST_PROJECT="deep-research"
uv run python -m worker

# Terminal 3: Run a research query
uv run python -m start_workflow "What are the latest advances in quantum computing?"
```

---

## AI Integrations

Temporal Python SDK provides integrations with the following tools and services:

- [Braintrust](./braintrust.mdx)

---

## Interceptors - Python SDK

The behavior of the Python SDK can be customized in many useful ways by modifying inbound and outbound calls using Interceptors. This is similar to the use of "middleware" in web frameworks such as [Django](https://docs.djangoproject.com/en/5.2/topics/http/middleware/), [Starlette](https://www.starlette.io/middleware/), and [Flask](https://flask.palletsprojects.com/en/stable/lifecycle/#middleware).

The methods you implement on your Interceptor classes can perform arbitrary side effects. Interceptors can also perform arbitrary modifications to incoming and outgoing data before it is received by the SDK's "real" implementation. There are five categories of inbound and outbound calls that you can modify in this way:

#### [Outbound Client calls](https://python.temporal.io/temporalio.client.OutboundInterceptor.html)

- `start_workflow()`
- `signal_workflow()`
- `list_workflows()`
- `update_schedule()`

This is not an exhaustive list; refer to the [Python SDK methods](https://python.temporal.io/temporalio.client.OutboundInterceptor.html) for more.
#### [Inbound Workflow calls](https://python.temporal.io/temporalio.worker.WorkflowInboundInterceptor.html)

- `execute_workflow()` (i.e. handle a Workflow Task that is starting a new Workflow Execution)
- `handle_query()`
- `handle_signal()`
- `handle_update_handler()`
- `handle_update_validator()`

#### [Outbound Workflow calls](https://python.temporal.io/temporalio.worker.WorkflowOutboundInterceptor.html)

- `start_activity()`
- `start_child_workflow()`
- `signal_child_workflow()`
- `signal_external_workflow()`
- `start_nexus_operation()`
- `start_local_activity()`

#### [Inbound Activity calls](https://python.temporal.io/temporalio.worker.ActivityInboundInterceptor.html)

- `execute_activity()` - i.e. handle a task to execute an Activity (this is the only Inbound Activity call)

#### [Outbound Activity calls](https://python.temporal.io/temporalio.worker.ActivityOutboundInterceptor.html)

- `info()`
- `heartbeat()`

The first of these categories is a Client call; the remaining four are Worker calls.

## Client call Interceptors

To modify outbound Client calls, define a class inheriting from [`client.Interceptor`](https://python.temporal.io/temporalio.client.Interceptor.html), and implement the method `intercept_client()` to return an instance of [`OutboundInterceptor`](https://python.temporal.io/temporalio.client.OutboundInterceptor.html) that implements the subset of outbound Client calls that you wish to modify.

This example implements an Interceptor on outbound Client calls that sets a certain key in the outbound `headers` field. A User ID is context-propagated by being sent in a header field with outbound requests:

```python
from typing import Any

import temporalio.client
import temporalio.converter
import temporalio.worker

# HEADER_KEY, user_id (a ContextVar), and _InputWithHeaders are defined
# elsewhere in the full context-propagation sample.


class ContextPropagationInterceptor(
    temporalio.client.Interceptor, temporalio.worker.Interceptor
):
    def __init__(
        self,
        payload_converter: temporalio.converter.PayloadConverter = temporalio.converter.default().payload_converter,
    ) -> None:
        self._payload_converter = payload_converter

    def intercept_client(
        self, next: temporalio.client.OutboundInterceptor
    ) -> temporalio.client.OutboundInterceptor:
        return _ContextPropagationClientOutboundInterceptor(
            next, self._payload_converter
        )


def set_header_from_context(
    input: _InputWithHeaders, payload_converter: temporalio.converter.PayloadConverter
) -> None:
    user_id_val = user_id.get()
    if user_id_val:
        input.headers = {
            **input.headers,
            HEADER_KEY: payload_converter.to_payload(user_id_val),
        }


class _ContextPropagationClientOutboundInterceptor(
    temporalio.client.OutboundInterceptor
):
    def __init__(
        self,
        next: temporalio.client.OutboundInterceptor,
        payload_converter: temporalio.converter.PayloadConverter,
    ) -> None:
        super().__init__(next)
        self._payload_converter = payload_converter

    async def start_workflow(
        self, input: temporalio.client.StartWorkflowInput
    ) -> temporalio.client.WorkflowHandle[Any, Any]:
        set_header_from_context(input, self._payload_converter)
        return await super().start_workflow(input)
```

It often happens that your Worker and Client interceptors will share code because they implement closely related logic. In the Python SDK, you will typically want to create an interceptor class that inherits from _both_ `client.Interceptor` and `worker.Interceptor` as above, since their method sets do not overlap. You can then pass this in the `interceptors` argument of `Client.connect()` in your client/starter code:

```python
client = await Client.connect(
    "localhost:7233",
    interceptors=[interceptor.ContextPropagationInterceptor()],
)
```

The `interceptors` list can contain multiple interceptors.
In this case they form a chain: a method implemented on an interceptor instance in the list can perform side effects, and modify the data, before passing it on to the corresponding method on the next interceptor in the list. Your interceptor classes need not implement every method; the default implementation always passes the data on to the next method in the interceptor chain.

During execution, when the SDK encounters an Inbound Activity call, it looks at the first Interceptor instance, gets hold of the appropriate intercepted method, and calls it. The intercepted method performs its function and then calls the same method on the next Interceptor in the chain. At the end of the chain, the SDK calls the "real" SDK method.

## Worker call Interceptors

To modify inbound and outbound Workflow and Activity calls, define a class inheriting from `worker.Interceptor`. This interface has two methods, `intercept_activity` and `workflow_interceptor_class`, which you can use to configure interception of Activity and Workflow calls, respectively. `intercept_activity` returns an `ActivityInboundInterceptor`.

This example demonstrates using an interceptor to measure [Schedule-To-Start](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout) latency:

```python
from temporalio import activity
from temporalio.client import Client
from temporalio.worker import (
    ActivityInboundInterceptor,
    ExecuteActivityInput,
    Interceptor,
    Worker,
)


class SimpleWorkerInterceptor(Interceptor):
    def intercept_activity(
        self, next: ActivityInboundInterceptor
    ) -> ActivityInboundInterceptor:
        return CustomScheduleToStartInterceptor(next)


class CustomScheduleToStartInterceptor(ActivityInboundInterceptor):
    async def execute_activity(self, input: ExecuteActivityInput):
        schedule_to_start = (
            activity.info().started_time
            - activity.info().current_attempt_scheduled_time
        )
        meter = activity.metric_meter()
        histogram = meter.create_histogram_timedelta(
            "custom_activity_schedule_to_start_latency",
            description="Time between activity scheduling and start",
            unit="duration",
        )
        histogram.record(
            schedule_to_start, {"workflow_type": activity.info().workflow_type}
        )
        return await self.next.execute_activity(input)


client = await Client.connect(
    "localhost:7233",
)

worker = Worker(
    client,
    interceptors=[SimpleWorkerInterceptor()],
    # ...
)
```

:::note

If your Interceptor class inherits from both `client.Interceptor` and `worker.Interceptor`, don't pass it directly to the `Worker()` constructor — pass it to `Client.connect()` instead. The Worker will then pick it up from the underlying Client. In other words, only pass Interceptors to `Worker()` if you are not using Client methods.

:::

The `workflow_interceptor_class` method returns a `WorkflowInboundInterceptor` subclass (the class itself, not an instance) that works similarly to `ActivityInboundInterceptor`.
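For example, here is a minimal sketch of a Worker interceptor that logs each Workflow Execution as it starts. The interceptor and class names are illustrative, not part of the SDK:

```python
from typing import Optional, Type

from temporalio import workflow
from temporalio.worker import (
    ExecuteWorkflowInput,
    Interceptor,
    WorkflowInboundInterceptor,
    WorkflowInterceptorClassInput,
)


class _LoggingWorkflowInboundInterceptor(WorkflowInboundInterceptor):
    async def execute_workflow(self, input: ExecuteWorkflowInput):
        workflow.logger.info(
            "Starting Workflow of type %s", workflow.info().workflow_type
        )
        # Delegate to the next interceptor in the chain (or the real implementation).
        return await super().execute_workflow(input)


class LoggingWorkerInterceptor(Interceptor):
    def workflow_interceptor_class(
        self, input: WorkflowInterceptorClassInput
    ) -> Optional[Type[WorkflowInboundInterceptor]]:
        # Return the class itself; the SDK instantiates it for each Workflow run.
        return _LoggingWorkflowInboundInterceptor
```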
---

## Workflow message passing - Python SDK

A Workflow can act like a stateful web service that receives messages: Queries, Signals, and Updates. The Workflow implementation defines these endpoints via handler methods that can react to incoming messages and return values. Temporal Clients use messages to read Workflow state and control its execution. See [Workflow message passing](/encyclopedia/workflow-message-passing) for a general overview of this topic. This page introduces these features for the Temporal Python SDK.

## Write message handlers {#writing-message-handlers}

:::info

The code that follows is part of a working message passing [sample](https://github.com/temporalio/samples-python/tree/main/message_passing/introduction).

:::

Follow these guidelines when writing your message handlers:

- Message handlers are defined as methods on the Workflow class, using one of the three decorators: [`@workflow.query`](https://python.temporal.io/temporalio.workflow.html#query), [`@workflow.signal`](https://python.temporal.io/temporalio.workflow.html#signal), and [`@workflow.update`](https://python.temporal.io/temporalio.workflow.html#update).
- The parameters and return values of handlers and the main Workflow function must be [serializable](/dataconversion).
- Prefer [data classes](https://docs.python.org/3/library/dataclasses.html) to multiple input parameters. Data class parameters allow you to add fields without changing the calling signature.

### Query handlers {#queries}

A [Query](/sending-messages#sending-queries) is a synchronous operation that retrieves state from a Workflow Execution:

```python
from dataclasses import dataclass
from enum import IntEnum

from temporalio import workflow


class Language(IntEnum):
    ARABIC = 1
    CHINESE = 2
    ENGLISH = 3
    FRENCH = 4
    HINDI = 5
    SPANISH = 6


@dataclass
class GetLanguagesInput:
    include_unsupported: bool


@workflow.defn
class GreetingWorkflow:
    def __init__(self) -> None:
        self.greetings = {
            Language.CHINESE: "你好,世界",
            Language.ENGLISH: "Hello, world",
        }

    @workflow.query
    def get_languages(self, input: GetLanguagesInput) -> list[Language]:
        # 👉 A Query handler returns a value: it can inspect but must not mutate the Workflow state.
        if input.include_unsupported:
            return list(Language)
        else:
            return list(self.greetings)
```

- The Query decorator can accept arguments. Refer to the API docs: [`@workflow.query`](https://python.temporal.io/temporalio.workflow.html#query).
- A Query handler uses `def`, not `async def`. You can't perform async operations like executing an Activity in a Query handler.

### Signal handlers {#signals}

A [Signal](/sending-messages#sending-signals) is an asynchronous message sent to a running Workflow Execution to change its state and control its flow:

```python
@dataclass
class ApproveInput:
    name: str


@workflow.defn
class GreetingWorkflow:
    ...

    @workflow.signal
    def approve(self, input: ApproveInput) -> None:
        # 👉 A Signal handler mutates the Workflow state but cannot return a value.
        self.approved_for_release = True
        self.approver_name = input.name
```

- The Signal decorator can accept arguments. Refer to the API docs: [`@workflow.signal`](https://python.temporal.io/temporalio.workflow.html#signal).
- The handler should not return a value. The response is sent immediately from the server, without waiting for the Workflow to process the Signal.
- Signal (and Update) handlers can be `async def`. This allows you to use Activities, Child Workflows, durable [`asyncio.sleep`](https://docs.python.org/3/library/asyncio-task.html#asyncio.sleep) Timers, [`workflow.wait_condition`](https://python.temporal.io/temporalio.workflow.html#wait_condition) conditions, and more. See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for guidelines on safely using async Signal and Update handlers.

### Update handlers and validators {#updates}

An [Update](/sending-messages#sending-updates) is a trackable synchronous request sent to a running Workflow Execution. It can change the Workflow state, control its flow, and return a result. The sender must wait until the Worker accepts or rejects the Update. The sender may wait further to receive a returned value or an exception if something goes wrong:

```python
@workflow.defn
class GreetingWorkflow:
    ...
    @workflow.update
    def set_language(self, language: Language) -> Language:
        # 👉 An Update handler can mutate the Workflow state and return a value.
        previous_language, self.language = self.language, language
        return previous_language

    @set_language.validator
    def validate_language(self, language: Language) -> None:
        if language not in self.greetings:
            # 👉 In an Update validator you raise any exception to reject the Update.
            raise ValueError(f"{language.name} is not supported")
```

- The Update decorator can take arguments (such as `name`, `dynamic`, and `unfinished_policy`), as described in the API reference docs for [`workflow.update`](https://python.temporal.io/temporalio.workflow.html#update).
- About validators:
  - Use validators to reject an Update before it is written to History. Validators are always optional. If you don't need to reject Updates, you can skip them.
  - The Update handler provides its own validator decorator: apply `@<handler_name>.validator` (for example, `@set_language.validator`). The validator must accept the same argument types as the handler and return `None`.
- Accepting and rejecting Updates with validators:
  - To reject an Update, raise an exception of any type in the validator.
  - Without a validator, Updates are always accepted.
- Validators and Event History:
  - The `WorkflowExecutionUpdateAccepted` event is written into the History whether the acceptance was automatic or programmatic.
  - When a Validator raises an error, the Update is rejected, `WorkflowExecutionUpdateAccepted` _won't_ be added to the Event History, and the caller receives an "Update failed" error.
- Use [`workflow.current_update_info`](https://python.temporal.io/temporalio.workflow.html#current_update_info) to obtain information about the current Update. This includes the Update ID, which can be useful for deduplication when using Continue-As-New: see [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing).
- Update (and Signal) handlers can be `async def`, letting them use Activities, Child Workflows, durable [`asyncio.sleep`](https://docs.python.org/3/library/asyncio-task.html#asyncio.sleep) Timers, [`workflow.wait_condition`](https://python.temporal.io/temporalio.workflow.html#wait_condition) conditions, and more. See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for safe usage guidelines.

## Send messages {#send-messages}

To send Queries, Signals, or Updates, you call methods on a [WorkflowHandle](https://python.temporal.io/temporalio.client.WorkflowHandle.html) object:

- Use [start_workflow](https://python.temporal.io/temporalio.client.Client.html#start_workflow) to start a Workflow and return its handle.
- Use [get_workflow_handle_for](https://python.temporal.io/temporalio.client.Client.html#get_workflow_handle_for) to retrieve a typed Workflow handle by its Workflow Id.

For example:

```python
from temporalio.client import Client

client = await Client.connect("localhost:7233")
workflow_handle = await client.start_workflow(
    GreetingWorkflow.run, id="greeting-workflow-1234", task_queue="my-task-queue"
)
```

To check the argument types required when sending messages -- and the return type for Queries and Updates -- refer to the corresponding handler method in the Workflow Definition.
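If the Workflow Execution is already running, you can construct a handle without starting it. A minimal sketch, assuming the `GreetingWorkflow` and Workflow Id from the preceding example:

```python
# Not awaited: this only builds a typed handle; no server call is made.
workflow_handle = client.get_workflow_handle_for(
    GreetingWorkflow.run, "greeting-workflow-1234"
)
```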
:::warning Using Continue-as-New and Updates

- Temporal _does not_ support Continue-as-New functionality within Update handlers.
- Complete all handlers _before_ using Continue-as-New.
- Use Continue-as-New from your main Workflow Definition method, just as you would complete or fail a Workflow Execution.

:::

### Send a Query {#send-query}

Use [`WorkflowHandle.query`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#query) to send a Query to a Workflow Execution:

```python
supported_languages = await workflow_handle.query(
    GreetingWorkflow.get_languages, GetLanguagesInput(include_unsupported=False)
)
```

- Sending a Query doesn't add events to a Workflow's Event History.
- You can send Queries to closed Workflow Executions within a Namespace's Workflow retention period. This includes Workflows that have completed, failed, or timed out. Querying terminated Workflows is not safe and, therefore, not supported.
- A Worker must be online and polling the Task Queue to process a Query.

### Send a Signal {#send-signal}

You can send a Signal to a Workflow Execution from a Temporal Client or from another Workflow Execution. However, you can only send Signals to Workflow Executions that haven't closed.

#### Send a Signal from a Client {#send-signal-from-client}

Use [`WorkflowHandle.signal`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#signal) to send a Signal:

```python
await workflow_handle.signal(GreetingWorkflow.approve, ApproveInput(name="me"))
```

- The call returns when the server accepts the Signal; it does _not_ wait for the Signal to be delivered to the Workflow Execution.
- The [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the Workflow's Event History.

#### Send a Signal from a Workflow {#send-signal-from-workflow}

A Workflow can send a Signal to another Workflow, known as an _External Signal_. You'll need a Workflow handle for the external Workflow. Use [`get_external_workflow_handle_for`](https://python.temporal.io/temporalio.workflow.html#get_external_workflow_handle_for):

```python
# ...
@workflow.defn
class WorkflowB:
    @workflow.run
    async def run(self) -> None:
        handle = workflow.get_external_workflow_handle_for(WorkflowA.run, "workflow-a")
        await handle.signal(WorkflowA.your_signal, "signal argument")
```

When an External Signal is sent:

- A [SignalExternalWorkflowExecutionInitiated](/references/events#signalexternalworkflowexecutioninitiated) Event appears in the sender's Event History.
- A [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the recipient's Event History.

#### Signal-With-Start {#signal-with-start}

Signal-With-Start allows a Client to send a Signal to a Workflow Execution, starting the Execution if it is not already running. To use Signal-With-Start, call the [`start_workflow`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) method and pass the `start_signal` argument with the name of your Signal:

```python
from temporalio.client import Client

# ...
async def main():
    client = await Client.connect("localhost:7233")
    await client.start_workflow(
        GreetingWorkflow.run,
        id="your-signal-with-start-workflow",
        task_queue="signal-tq",
        start_signal="submit_greeting",
        start_signal_args=["User Signal with Start"],
    )
```

### Send an Update {#send-update-from-client}

An Update is a synchronous, blocking call that can change Workflow state, control its flow, and return a result. A client sending an Update must wait until the Server delivers the Update to a Worker. Workers must be available and responsive. If you need a response as soon as the Server receives the request, use a Signal instead. Also note that you can't send Updates to other Workflow Executions.
- `WorkflowExecutionUpdateAccepted` is added to the Event History when the Worker confirms that the Update passed validation.
- `WorkflowExecutionUpdateCompleted` is added to the Event History when the Worker confirms that the Update has finished.

To send an Update to a Workflow Execution, you can:

- Call [`execute_update`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#execute_update) and wait for the Update to complete. This code fetches an Update result:

  ```python
  previous_language = await workflow_handle.execute_update(
      GreetingWorkflow.set_language, Language.CHINESE
  )
  ```

- Send [`start_update`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#start_update) to receive an [`UpdateHandle`](https://python.temporal.io/temporalio.client.WorkflowUpdateHandle.html) as soon as the Update is accepted.
  - Use this `UpdateHandle` later to fetch your results.
  - `async def` Update handlers normally perform long-running asynchronous operations, such as executing an Activity.
  - `start_update` only waits until the Worker has accepted or rejected the Update, not until all asynchronous operations are complete.

For example:

```python
from temporalio.client import WorkflowUpdateStage

# Wait until the update is accepted
update_handle = await workflow_handle.start_update(
    HelloWorldWorkflow.set_greeting,
    HelloWorldInput("World"),
    wait_for_stage=WorkflowUpdateStage.ACCEPTED,
)
# Wait until the update is completed
update_result = await update_handle.result()
```

For more details, see the [Async handlers](#async-handlers) section.

To obtain an Update handle, you can:

- Use [`start_update`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#start_update) to start an Update and return the handle, as shown in the preceding example.
- Use [`get_update_handle_for`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#get_update_handle_for) to fetch a handle for an in-progress Update using the Update ID.

#### Update-With-Start {#update-with-start}

:::tip

For open source server users, [Temporal Server version 1.28](https://github.com/temporalio/temporal/releases/tag/v1.28.0) or later is recommended.

:::

[Update-with-Start](/sending-messages#update-with-start) lets you [send an Update](/develop/python/message-passing#send-update-from-client) that checks whether an already-running Workflow with that ID exists:

- If the Workflow exists, the Update is processed.
- If the Workflow does not exist, a new Workflow Execution is started with the given ID, and the Update is processed before the main Workflow method starts to execute.

Use [`execute_update_with_start_workflow`](https://python.temporal.io/temporalio.client.Client.html#execute_update_with_start_workflow) to start the Update and wait for the result in one go. Alternatively, use [`start_update_with_start_workflow`](https://python.temporal.io/temporalio.client.Client.html#start_update_with_start_workflow) to start the Update and receive a [`WorkflowUpdateHandle`](https://python.temporal.io/temporalio.client.WorkflowUpdateHandle.html), and then use `await update_handle.result()` to retrieve the result from the Update. These calls return once the requested Update wait stage has been reached, or when the request times out.

You will need to provide a [`WithStartWorkflowOperation`](https://python.temporal.io/temporalio.client.WithStartWorkflowOperation.html) to define the Workflow that will be started if necessary, and its arguments.
You must specify a [WorkflowIdConflictPolicy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) when creating the `WithStartWorkflowOperation`. Note that a `WithStartWorkflowOperation` can only be used once.

Here's an example taken from the [lazy_initialization](https://github.com/temporalio/samples-python/blob/main/message_passing/update_with_start/lazy_initialization/starter.py) sample:

```python
from decimal import Decimal

from temporalio import common
from temporalio.client import WithStartWorkflowOperation, WorkflowUpdateFailedError

# ShoppingCartWorkflow, ShoppingCartItem, temporal_client, cart_id, item_id,
# and quantity are defined in the full sample.
start_op = WithStartWorkflowOperation(
    ShoppingCartWorkflow.run,
    id=cart_id,
    id_conflict_policy=common.WorkflowIDConflictPolicy.USE_EXISTING,
    task_queue="my-task-queue",
)
try:
    price = Decimal(
        await temporal_client.execute_update_with_start_workflow(
            ShoppingCartWorkflow.add_item,
            ShoppingCartItem(sku=item_id, quantity=quantity),
            start_workflow_operation=start_op,
        )
    )
except WorkflowUpdateFailedError:
    price = None

workflow_handle = await start_op.workflow_handle()
return price, workflow_handle
```

:::info SEND MESSAGES WITHOUT TYPE SAFETY

In real-world development, you may sometimes be unable to import Workflow Definition method signatures. When you don't have access to the Workflow Definition, or it isn't written in Python, you can still use non-type-safe APIs and dynamic method invocation. Pass method names instead of method objects to:

- [`Client.start_workflow`](https://python.temporal.io/temporalio.client.Client.html#start_workflow)
- [`WorkflowHandle.query`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#query)
- [`WorkflowHandle.signal`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#signal)
- [`WorkflowHandle.execute_update`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#execute_update)
- [`WorkflowHandle.start_update`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#start_update)

Use these non-type-safe APIs:

- [`get_workflow_handle`](https://python.temporal.io/temporalio.client.Client.html#get_workflow_handle)
- [`get_external_workflow_handle`](https://python.temporal.io/temporalio.workflow.html#get_external_workflow_handle)

:::

## Message handler patterns {#message-handler-patterns}

This section covers common write operations, such as Signal and Update handlers. It doesn't apply to pure read operations, like Queries or Update Validators.

:::tip

For additional information, see [Inject work into the main Workflow](/handling-messages#injecting-work-into-main-workflow), [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing), and [this sample](https://github.com/temporalio/samples-python/blob/message-passing/message_passing/safe_message_handlers/README.md) demonstrating safe `async` message handling.

:::

### Use async handlers {#async-handlers}

Signal and Update handlers can be `async def` as well as `def`. Using `async def` allows you to use `await` with Activities, Child Workflows, [`asyncio.sleep`](https://docs.python.org/3/library/asyncio-task.html#asyncio.sleep) Timers, [`workflow.wait_condition`](https://python.temporal.io/temporalio.workflow.html#wait_condition) conditions, etc. This expands the possibilities for what can be done by a handler, but it also means that handler executions and your main Workflow method are all running concurrently, with switching occurring between them at `await` calls. It's essential to understand the things that could go wrong in order to use `async def` handlers safely.
See [Workflow message passing](/encyclopedia/workflow-message-passing) for guidance on safe usage of async Signal and Update handlers, the [Safe message handlers](https://github.com/temporalio/samples-python/tree/main/message_passing/safe_message_handlers) sample, and the [Controlling handler concurrency](#control-handler-concurrency) and [Waiting for message handlers to finish](#wait-for-message-handlers) sections below.

The following code executes an Activity that makes a network call to a remote service. It modifies the Update handler from earlier on this page, turning it into an `async def`:

```python
import asyncio
from datetime import timedelta
from typing import Optional

from temporalio import activity, workflow
from temporalio.exceptions import ApplicationError


@activity.defn
async def call_greeting_service(to_language: Language) -> Optional[str]:
    await asyncio.sleep(0.2)  # Pretend that we are calling a remote service.
    greetings = {
        Language.ARABIC: "مرحبا بالعالم",
        Language.CHINESE: "你好,世界",
        Language.ENGLISH: "Hello, world",
        Language.FRENCH: "Bonjour, monde",
        Language.HINDI: "नमस्ते दुनिया",
        Language.SPANISH: "Hola mundo",
    }
    return greetings.get(to_language)


@workflow.defn
class GreetingWorkflow:
    def __init__(self) -> None:
        self.lock = asyncio.Lock()
        ...

    ...

    @workflow.update
    async def set_language(self, language: Language) -> Language:
        if language not in self.greetings:
            # 👉 Use a lock here to ensure that multiple calls to set_language are processed in order.
            async with self.lock:
                greeting = await workflow.execute_activity(
                    call_greeting_service,
                    language,
                    start_to_close_timeout=timedelta(seconds=10),
                )
                if greeting is None:
                    # 👉 An update validator cannot be async, so cannot be used to check that the remote
                    # call_greeting_service supports the requested language. Raising ApplicationError
                    # will fail the Update, but the WorkflowExecutionUpdateAccepted event will still be
                    # added to history.
                    raise ApplicationError(
                        f"Greeting service does not support {language.name}"
                    )
                self.greetings[language] = greeting
        previous_language, self.language = self.language, language
        return previous_language
```

After updating the code to use an `async def`, your Update handler can schedule an Activity and await the result. Although an `async def` Signal handler can also execute an Activity, using an Update handler allows the Client to receive a result or error once the Activity completes. This lets your client track the progress of asynchronous work performed by the Update's Activities, Child Workflows, etc.

### Add wait conditions to block

Sometimes, `async def` Signal or Update handlers need to meet certain conditions before they should continue. You can use [`workflow.wait_condition`](https://python.temporal.io/temporalio.workflow.html#wait_condition) to prevent the code from proceeding until a condition is true. You specify the condition by passing a function that returns `True` or `False`, and you can optionally set a timeout (see the sketch after the following list). This is an important feature that helps you control your handler logic. Here are three important use cases for `workflow.wait_condition`:

- Wait for a Signal or Update to arrive.
- Wait in a handler until it's appropriate to continue.
- Wait in the main Workflow until all active handlers have finished.
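As a minimal sketch of the timeout option: `workflow.wait_condition` accepts a `timeout` argument and raises `asyncio.TimeoutError` if the condition isn't met in time. The Workflow and handler names here are illustrative:

```python
import asyncio
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class ApprovalWorkflow:
    def __init__(self) -> None:
        self.approved_for_release = False

    @workflow.signal
    def approve(self) -> None:
        self.approved_for_release = True

    @workflow.update
    async def wait_for_approval(self) -> str:
        try:
            # Proceed once approved, or give up after 30 seconds.
            await workflow.wait_condition(
                lambda: self.approved_for_release, timeout=timedelta(seconds=30)
            )
        except asyncio.TimeoutError:
            return "not approved in time"
        return "approved"

    @workflow.run
    async def run(self) -> str:
        await workflow.wait_condition(lambda: self.approved_for_release)
        return "done"
```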
#### Wait for a Signal or Update to arrive

It's common to use `workflow.wait_condition` to wait for a particular Signal or Update to be sent by a Client:

```python
@workflow.defn
class GreetingWorkflow:
    def __init__(self) -> None:
        self.approved_for_release = False
        self.approver_name: Optional[str] = None

    @workflow.signal
    def approve(self, input: ApproveInput) -> None:
        self.approved_for_release = True
        self.approver_name = input.name

    @workflow.run
    async def run(self) -> str:
        await workflow.wait_condition(lambda: self.approved_for_release)
        ...
        return self.greetings[self.language]
```

#### Use wait conditions in handlers {#wait-in-message-handler}

It's common to use a Workflow wait condition to wait until a handler should start. You can also use wait conditions anywhere else in the handler to wait for a specific condition to become `True`. This allows you to write handlers that pause at multiple points, each time waiting for a required condition to become `True`.

Consider a `ready_for_update_to_execute` method that runs before your Update handler executes. The `workflow.wait_condition` method waits until your condition is met:

```python
@workflow.update
async def my_update(self, update_input: UpdateInput) -> str:
    await workflow.wait_condition(
        lambda: self.ready_for_update_to_execute(update_input)
    )
```

#### Ensure your handlers finish before the Workflow completes {#wait-for-message-handlers}

Workflow wait conditions can ensure your handler completes before a Workflow finishes. When your Workflow uses `async def` Signal or Update handlers, your main Workflow method can return or Continue-as-New while a handler is still waiting on an async task, such as an Activity result. If the Workflow completes first, it may interrupt the handler before it finishes crucial work, and cause client errors when trying to retrieve Update results. Use [`workflow.wait_condition`](https://python.temporal.io/temporalio.workflow.html#wait_condition) and [`all_handlers_finished`](https://python.temporal.io/temporalio.workflow.html#all_handlers_finished) to address this problem and allow your Workflow to end smoothly:

```python
@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> str:
        ...
        await workflow.wait_condition(workflow.all_handlers_finished)
        return "workflow-result"
```

By default, your Worker will log a warning when you allow a Workflow Execution to finish with unfinished handler executions. You can silence these warnings on a per-handler basis by passing the `unfinished_policy` argument to the [`@workflow.signal`](https://python.temporal.io/temporalio.workflow.html#signal) / [`workflow.update`](https://python.temporal.io/temporalio.workflow.html#update) decorator:

```python
@workflow.update(unfinished_policy=workflow.HandlerUnfinishedPolicy.ABANDON)
async def my_update(self) -> None:
    ...
```

See [Finishing handlers before the Workflow completes](/handling-messages#finishing-message-handlers) for more information.

### Use `@workflow.init` to operate on Workflow input before any handler executes

Normally, your Workflow `__init__` method won't have any parameters. However, if you use the `@workflow.init` decorator on your `__init__` method, you can give it the same [Workflow parameters](/develop/python/core-application#workflow-parameters) as your `@workflow.run` method.
The SDK will then ensure that your `__init__` method receives the Workflow input arguments that the [Client sent](/develop/python/temporal-client#start-workflow-execution). (The Workflow input arguments are also passed to your `@workflow.run` method -- that always happens, whether or not you use the `@workflow.init` decorator.) This is useful if you have message handlers that need access to the Workflow input: see [Initializing the Workflow first](/handling-messages#workflow-initializers).

Here's an example. Notice that `__init__` and `get_greeting` must have the same parameters, with the same type annotations:

```python
@dataclass
class MyWorkflowInput:
    name: str


@workflow.defn
class WorkflowRunSeesWorkflowInitWorkflow:
    @workflow.init
    def __init__(self, workflow_input: MyWorkflowInput) -> None:
        self.name_with_title = f"Sir {workflow_input.name}"
        self.title_has_been_checked = False

    @workflow.run
    async def get_greeting(self, workflow_input: MyWorkflowInput) -> str:
        await workflow.wait_condition(lambda: self.title_has_been_checked)
        return f"Hello, {self.name_with_title}"

    @workflow.update
    async def check_title_validity(self) -> bool:
        # 👉 The handler is now guaranteed to see the workflow input
        # after it has been processed by __init__.
        # (check_title_validity here refers to an Activity defined at module level.)
        is_valid = await workflow.execute_activity(
            check_title_validity,
            self.name_with_title,
            schedule_to_close_timeout=timedelta(seconds=10),
        )
        self.title_has_been_checked = True
        return is_valid
```

### Use `asyncio.Lock` to prevent concurrent handler execution {#control-handler-concurrency}

Concurrent processes can interact in unpredictable ways. Incorrectly written [concurrent message-passing](/handling-messages#message-handler-concurrency) code may not work correctly when multiple handler instances run simultaneously. Here's an example of a pathological case:

```python
@workflow.defn
class MyWorkflow:
    @workflow.signal
    async def bad_async_handler(self):
        data = await workflow.execute_activity(
            fetch_data, start_to_close_timeout=timedelta(seconds=10)
        )
        self.x = data.x
        # 🐛🐛 Bug!! If multiple instances of this handler are executing concurrently, then
        # there may be times when the Workflow has self.x from one Activity execution and self.y from another.
        await asyncio.sleep(1)  # or await anything else
        self.y = data.y
```

Coordinating access using `asyncio.Lock` corrects this code. Locking makes sure that only one handler instance can execute a specific section of code at any given time:

```python
@workflow.defn
class MyWorkflow:
    def __init__(self) -> None:
        ...
        self.lock = asyncio.Lock()
        ...

    @workflow.signal
    async def safe_async_handler(self):
        async with self.lock:
            data = await workflow.execute_activity(
                fetch_data, start_to_close_timeout=timedelta(seconds=10)
            )
            self.x = data.x
            # ✅ OK: the scheduler may switch now to a different handler execution, or to the main workflow
            # method, but no other execution of this handler can run until this execution finishes.
            await asyncio.sleep(1)  # or await anything else
            self.y = data.y
```

## Message handler troubleshooting {#message-handler-troubleshooting}

When sending a Signal, Update, or Query to a Workflow, your Client might encounter the following errors:

- **The Client can't contact the server**: You'll receive a [`temporalio.service.RPCError`](https://python.temporal.io/temporalio.service.RPCError.html) on which the `status` attribute is [`RPCStatusCode`](https://python.temporal.io/temporalio.service.RPCStatusCode.html) `UNAVAILABLE` (after some retries; see the `retry_config` argument to [`Client.connect`](https://python.temporal.io/temporalio.client.Client.html#connect)).
- **The Workflow does not exist**: You'll receive a [`temporalio.service.RPCError`](https://python.temporal.io/temporalio.service.RPCError.html) exception on which the `status` attribute is [`RPCStatusCode`](https://python.temporal.io/temporalio.service.RPCStatusCode.html) `NOT_FOUND`.

See [Exceptions in message handlers](/handling-messages#exceptions) for a non–Python-specific discussion of this topic.

### Problems when sending a Signal {#signal-problems}

When sending a Signal, the only exceptions your request can produce are the `RPCError`s described above. For Queries and Updates, the Client waits for a response from the Worker, so additional errors may occur while the Worker executes the handler.

### Problems when sending an Update {#update-problems}

When working with Updates, in addition to the `RPCError`s described above, you may encounter these errors:

- **No Workflow Workers are polling the Task Queue**: Your request will be retried by the SDK Client indefinitely. You can use [`asyncio.timeout`](https://docs.python.org/3/library/asyncio-task.html#timeouts) to impose a timeout. This raises a [`temporalio.client.WorkflowUpdateRPCTimeoutOrCancelledError`](https://python.temporal.io/temporalio.client.WorkflowUpdateRPCTimeoutOrCancelledError.html) exception; see the sketch after this list.
- **Update failed**: You'll receive a [`temporalio.client.WorkflowUpdateFailedError`](https://python.temporal.io/temporalio.client.WorkflowUpdateFailedError.html) exception. There are two ways this can happen:
  - The Update was rejected by an Update validator defined in the Workflow alongside the Update handler.
  - The Update failed after having been accepted. Update failures are like [Workflow failures](/references/failures#errors-in-workflows). Issues that cause a Workflow failure in the main method also cause Update failures in the Update handler. These might include:
    - A failed Child Workflow
    - A failed Activity (if the Activity retries have been set to a finite number)
    - The Workflow author raising [`ApplicationError`](/references/failures#application-failure)
    - Any error listed in [workflow_failure_exception_types](https://python.temporal.io/temporalio.worker.Worker.html) (empty by default)
- **The handler caused the Workflow Task to fail**: A [Workflow Task Failure](/references/failures#errors-in-workflows) causes the server to retry Workflow Tasks indefinitely. What happens to your Update request depends on its stage:
  - If the request hasn't been accepted by the server, you receive a `FAILED_PRECONDITION` [`temporalio.service.RPCError`](https://python.temporal.io/temporalio.service.RPCError.html) exception.
  - If the request has been accepted, it is durable. Once the Workflow is healthy again after a code deploy, use an [`UpdateHandle`](https://python.temporal.io/temporalio.client.WorkflowUpdateHandle.html) to fetch the Update result.
- **The Workflow finished while the Update handler execution was in progress**: You'll receive a [`temporalio.service.RPCError`](https://python.temporal.io/temporalio.service.RPCError.html) exception with a `status` attribute of [`RPCStatusCode`](https://python.temporal.io/temporalio.service.RPCStatusCode.html) `NOT_FOUND`. This happens if the Workflow finished while the Update handler execution was in progress, for example because:
  - The Workflow was canceled or failed.
  - The Workflow completed normally or continued-as-new and the Workflow author did not [wait for handlers to be finished](/handling-messages#finishing-message-handlers).
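Here's a minimal sketch of bounding an Update request with `asyncio.timeout` (Python 3.11+), assuming the `workflow_handle`, `GreetingWorkflow`, and `Language` examples from earlier on this page:

```python
import asyncio

from temporalio.client import WorkflowUpdateRPCTimeoutOrCancelledError

try:
    # Give up if the Update hasn't completed within 10 seconds,
    # for example because no Worker is polling the Task Queue.
    async with asyncio.timeout(10):
        previous_language = await workflow_handle.execute_update(
            GreetingWorkflow.set_language, Language.CHINESE
        )
except WorkflowUpdateRPCTimeoutOrCancelledError:
    ...  # Retry later, or surface the error to the caller.
```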
### Problems when sending a Query {#query-problems}

When working with Queries, in addition to the `RPCError`s described above, you may encounter these errors:

- **There is no Workflow Worker polling the Task Queue**: You'll receive a [`temporalio.service.RPCError`](https://python.temporal.io/temporalio.service.RPCError.html) exception on which the `status` attribute is [`RPCStatusCode`](https://python.temporal.io/temporalio.service.RPCStatusCode.html) `FAILED_PRECONDITION`.
- **Query failed**: You'll receive a [`temporalio.client.WorkflowQueryFailedError`](https://python.temporal.io/temporalio.client.WorkflowQueryFailedError.html) exception if something goes wrong during a Query. Any exception in a Query handler will trigger this error. This differs from Signal and Update requests, where exceptions can lead to Workflow Task Failure instead.
- **The handler caused the Workflow Task to fail**: This would happen, for example, if the Query handler blocks the thread for too long without yielding.

## Dynamic components {#dynamic-handler}

A dynamic Workflow, Activity, Signal, Update, or Query is a kind of unnamed item. Normally, these items are registered by name with the Worker and invoked at runtime. When an unregistered or unrecognized Workflow, Activity, or message request arrives with a recognized method signature, the Worker can use a pre-registered dynamic stand-in.

For example, you might send a request to start a Workflow named "MyUnknownWorkflow". After receiving a Workflow Task, the Worker may find that there's no registered Workflow Definition of that type. It then checks to see if there's a registered dynamic Workflow. If the dynamic Workflow signature matches the incoming Workflow signature, the Worker invokes it just as it would invoke a statically named, non-dynamic version. By registering dynamic versions of your Temporal components, the Worker can fall back to these alternate implementations for name mismatches.

:::caution

Use dynamic elements judiciously and as a fallback mechanism, not a primary design. They can introduce long-term maintainability and debugging issues. Reserve dynamic invocation for cases where a name isn't known, or can't be known, at compile time.

:::

### Set a dynamic Signal, Query, or Update handler {#set-a-dynamic-signal}

A dynamic Signal, Query, or Update refers to a special stand-in handler. It's used when an unregistered handler request arrives. Consider a Signal, where you might send something like `workflow_handle.signal(MyWorkflow.my_signal_method, my_arg)`. This is a type-safe, compiler-checked approach that guarantees a method exists. There's also a non-type-safe string-based form: `workflow_handle.signal("some-name", my_arg)`.
When sent to the server, the name is checked only after arriving at the Worker. This is where "dynamic handlers" come in. After failing to find a handler with a matching name and type, the Worker checks for a registered dynamic stand-in handler. If found, the Worker uses that instead.

You must opt handlers into dynamic access. Add `dynamic=True` to the handler decorator (for example, `@workflow.signal(dynamic=True)`) to make a handler dynamic. The handler's signature must accept `(self, name: str, args: Sequence[RawValue])`. Use a [payload_converter](https://python.temporal.io/temporalio.workflow.html#payload_converter) function to convert `RawValue` objects to your required type. For example:

```python
from typing import Sequence

from temporalio.common import RawValue

...

    @workflow.signal(dynamic=True)
    async def dynamic_signal(self, name: str, args: Sequence[RawValue]) -> None:
        ...
```

This sample creates a `dynamic_signal` Signal. When an unregistered or unrecognized Signal arrives with a matching signature, dynamic assignment uses this handler to manage the Signal. It is responsible for transforming the sequence contents into usable data in a form that the method's logic can process and act on.
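The same pattern applies to Queries and Updates. As a minimal illustrative sketch (the Workflow and handler names are hypothetical), a dynamic Query handler catches any Query name that has no statically registered handler:

```python
from typing import Sequence

from temporalio import workflow
from temporalio.common import RawValue


@workflow.defn
class MyWorkflow:
    @workflow.query(dynamic=True)
    def dynamic_query(self, name: str, args: Sequence[RawValue]) -> str:
        # Convert the first raw argument to a string before using it.
        first_arg = workflow.payload_converter().from_payload(args[0].payload, str)
        return f"Query {name} received: {first_arg}"
```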
### Set a dynamic Workflow {#set-a-dynamic-workflow}

A dynamic Workflow refers to a special stand-in Workflow Definition. It's used when an unknown Workflow Execution request arrives. Consider the "MyUnknownWorkflow" example described earlier. The Worker may find there's no registered Workflow Definition of that name or type. After failing to find a Workflow Definition with a matching type, the Worker looks for a dynamic stand-in. If found, it invokes that instead.

To participate, your Workflow must opt into dynamic access. Adding `dynamic=True` to the `@workflow.defn` decorator makes the Workflow Definition eligible to participate in dynamic invocation. You must register the Workflow with the [Worker](https://python.temporal.io/temporalio.worker.html) before it can be invoked. The Workflow Definition's primary Workflow method must accept a single argument of type `Sequence[temporalio.common.RawValue]`. Use a [payload_converter](https://python.temporal.io/temporalio.workflow.html#payload_converter) function to convert `RawValue` objects to your required type. For example:

```python
# ...
@workflow.defn(dynamic=True)
class DynamicWorkflow:
    @workflow.run
    async def run(self, args: Sequence[RawValue]) -> str:
        name = workflow.payload_converter().from_payload(args[0].payload, str)
        return await workflow.execute_activity(
            default_greeting,
            YourDataClass("Hello", name),
            start_to_close_timeout=timedelta(seconds=10),
        )
```

This Workflow converts the first `Sequence` element to a string, and uses that to execute an Activity.

### Set a dynamic Activity {#set-a-dynamic-activity}

A dynamic Activity is a stand-in implementation. It's used when an Activity Task with an unknown Activity type is received by the Worker. To participate, your Activity must opt into dynamic access. Adding `dynamic=True` to the `@activity.defn` decorator makes the Activity Definition eligible to participate in dynamic invocation. You must register the Activity with the [Worker](https://python.temporal.io/temporalio.worker.html) before it can be invoked. The Activity Definition must then accept a single argument of type `Sequence[temporalio.common.RawValue]`. Use a [payload_converter](https://python.temporal.io/temporalio.activity.html#payload_converter) function to convert `RawValue` objects to your required types. For example:

```python
# ...
@activity.defn(dynamic=True)
async def dynamic_greeting(args: Sequence[RawValue]) -> str:
    arg1 = activity.payload_converter().from_payload(args[0].payload, YourDataClass)
    return (
        f"{arg1.greeting}, {arg1.name}!\nActivity Type: {activity.info().activity_type}"
    )


# ...
@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            "unregistered_activity",
            YourDataClass("Hello", name),
            start_to_close_timeout=timedelta(seconds=10),
        )
```

This example invokes an unregistered Activity by name. The Worker resolves it using the registered dynamic Activity instead. When possible, prefer compiler-checked, type-safe arguments rather than Activity name strings.

---

## Observability - Python SDK

The observability section of the Temporal Developer's guide covers the many ways to view the current state of your [Temporal Application](/temporal#temporal-application)—that is, ways to view which [Workflow Executions](/workflow-execution) are tracked by the [Temporal Platform](/temporal#temporal-platform) and the state of any specified Workflow Execution, either currently or at points of an execution.

This section covers features related to viewing the state of the application, including:

- [Emit metrics](#metrics)
- [Set up tracing](#tracing)
- [Log from a Workflow](#logging)
- [Visibility APIs](#visibility)

## Emit metrics {#metrics}

**How to emit metrics**

Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the [SDK metrics reference](/references/sdk-metrics).

- For an overview of Prometheus and Grafana integration, refer to the [Monitoring](/self-hosted-guide/monitoring) guide.
- For a list of metrics, see the [SDK metrics reference](/references/sdk-metrics).
- For an end-to-end example that exposes metrics with the Python SDK, refer to the [samples-python](https://github.com/temporalio/samples-python/tree/main/prometheus) repo.

Metrics in Python are configured globally; therefore, you should set a Prometheus endpoint before any other Temporal code. The following example exposes a Prometheus endpoint on port `9000`.

```python
from temporalio.client import Client
from temporalio.runtime import PrometheusConfig, Runtime, TelemetryConfig

# Create a new runtime that has telemetry enabled. Create this first to avoid
# the default Runtime from being lazily created.
new_runtime = Runtime(
    telemetry=TelemetryConfig(metrics=PrometheusConfig(bind_address="0.0.0.0:9000"))
)
my_client = await Client.connect("my.temporal.host:7233", runtime=new_runtime)
```

## Set up tracing {#tracing}

**How to set up tracing**

Tracing allows you to view the call graph of a Workflow along with its Activities and any Child Workflows. Temporal Web's tracing capabilities mainly track Activity Execution within a Temporal context. If you need custom tracing specific to your use case, you should make use of context propagation to add tracing logic accordingly.

To configure tracing in Python, install the `opentelemetry` dependencies.

```bash
# This command installs the `opentelemetry` dependencies.
pip install temporalio[opentelemetry]
```

Then the [`temporalio.contrib.opentelemetry.TracingInterceptor`](https://python.temporal.io/temporalio.contrib.opentelemetry.TracingInterceptor.html) class can be set as an interceptor as an argument of [`Client.connect()`](https://python.temporal.io/temporalio.client.Client.html#connect). When your Client is connected, spans are created for all Client calls, Activities, and Workflow invocations on the Worker. Spans are created and serialized through the server to give one trace for a Workflow Execution.
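A minimal sketch of wiring up the interceptor, assuming you have already configured an OpenTelemetry tracer provider in your process:

```python
from temporalio.client import Client
from temporalio.contrib.opentelemetry import TracingInterceptor

client = await Client.connect(
    "localhost:7233",
    interceptors=[TracingInterceptor()],
)
```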
## Log from a Workflow {#logging}

Logging enables you to record critical information during code execution. Loggers create an audit trail and capture information about your Workflow's operation. An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.

The logger supports the following logging levels:

| Level   | Use                                                                                                       |
| ------- | --------------------------------------------------------------------------------------------------------- |
| `TRACE` | The most detailed level of logging, used for very fine-grained information.                                |
| `DEBUG` | Detailed information, typically useful for debugging purposes.                                             |
| `INFO`  | General information about the application's operation.                                                     |
| `WARN`  | Indicates potentially harmful situations or minor issues that don't prevent the application from working.  |
| `ERROR` | Indicates error conditions that might still allow the application to continue running.                     |

The Temporal SDK core uses `WARN` as its default logging level.

**How to log from a Workflow**

Send logs and errors to a logging service, so that when things go wrong, you can see what happened.

You can log from a Workflow using Python's standard `logging` module. Configure logging at the level you want to emit. The following example sets the logging level to `INFO`.

```python
import logging

logging.basicConfig(level=logging.INFO)
```

Then in your Workflow, log through the Workflow-safe [`logger`](https://python.temporal.io/temporalio.workflow.html#logger):

```python
# ...
workflow.logger.info("Workflow input parameter: %s", name)
```

### Custom logger {#custom-logger}

To use a custom logger, use the built-in [logging facility for Python](https://docs.python.org/3/library/logging.html).

## Visibility APIs {#visibility}

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

### Use Search Attributes {#search-attributes}

The typical method of retrieving a Workflow Execution is by its Workflow Id. However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments. You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime` and `ExecutionStatus` are automatically added to Workflow Executions.
## Visibility APIs {#visibility}

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

### Use Search Attributes {#search-attributes}

The typical method of retrieving a Workflow Execution is by its Workflow Id. However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments. You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime` and `ExecutionStatus` are automatically added to Workflow Executions.
- _Custom Search Attributes_ can contain their own domain-specific data (like `customerId` or `numItems`).
- A few [generic Custom Search Attributes](/search-attribute#custom-search-attribute) like `CustomKeywordField` and `CustomIntField` are created by default in Temporal's [Docker Compose](https://github.com/temporalio/docker-compose).

The steps to use custom Search Attributes are:

- Create a new Search Attribute in your Temporal Service using the Temporal CLI or Web UI.
  - For example: `temporal operator search-attribute create --name CustomKeywordField --type Text`
    - Replace `CustomKeywordField` with the name of your Search Attribute.
    - Replace `Text` with a type value associated with your Search Attribute: `Text` | `Keyword` | `Int` | `Double` | `Bool` | `Datetime` | `KeywordList`
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client by including it as an option when starting the Execution.
  - In the Workflow by calling `upsert_search_attributes`.
- Read the value of the Search Attribute:
  - On the Client by calling `DescribeWorkflow`.
  - In the Workflow by looking at `WorkflowInfo`.
- Query Workflow Executions by the Search Attribute using a [List Filter](/list-filter):
  - [In the Temporal CLI](/cli/operator#list-2)
  - In code by calling `ListWorkflowExecutions`.

Here is how to query Workflow Executions:

Use the [list_workflows()](https://python.temporal.io/temporalio.client.Client.html#list_workflows) method on the Client handle and pass a [List Filter](/list-filter) as an argument to filter the listed Workflows.

```python
# ...
async for workflow in client.list_workflows('WorkflowType="GreetingWorkflow"'):
    print(f"Workflow: {workflow.id}")
```

### How to set custom Search Attributes {#custom-search-attributes}

After you've created custom Search Attributes in your Temporal Service (using `temporal operator search-attribute create` or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

Use `SearchAttributeKey` to create your Search Attributes. Then, when starting a Workflow Execution using `client.start_workflow()`, include the custom Search Attributes by passing instances of `SearchAttributePair()` containing each of your keys and starting values to a parameter called `search_attributes`. If you had custom Search Attributes `CustomerId` of type `Keyword` and `MiscData` of type `Text`, you could provide these starting values:

```python
from temporalio.common import SearchAttributeKey, SearchAttributePair, TypedSearchAttributes

customer_id_key = SearchAttributeKey.for_keyword("CustomerId")
misc_data_key = SearchAttributeKey.for_text("MiscData")

handle = await client.start_workflow(
    GreetingWorkflow.run,
    id="search-attributes-workflow-id",
    task_queue="search-attributes-task-queue",
    search_attributes=TypedSearchAttributes([
        SearchAttributePair(customer_id_key, "customer_1"),
        SearchAttributePair(misc_data_key, "customer_1_data")
    ]),
)
```

In this example, `CustomerId` and `MiscData` are set as Search Attributes. These attributes are useful for querying Workflows based on the customer ID or other domain-specific data.
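To read the value back inside Workflow code, you can look at the Workflow's info. A small sketch, assuming the `customer_id_key` defined above is accessible to the Workflow:

```python
from temporalio import workflow

@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self) -> str:
        # Returns the current value of the CustomerId Search Attribute,
        # or None if it hasn't been set on this Workflow Execution.
        customer_id = workflow.info().typed_search_attributes.get(customer_id_key)
        return f"Hello, {customer_id}"
```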
### Upsert Search Attributes {#upsert-search-attributes}

You can upsert Search Attributes to add or update Search Attributes from within Workflow code. To upsert custom Search Attributes, use the [`upsert_search_attributes()`](https://python.temporal.io/temporalio.workflow.html#upsert_search_attributes) method and pass it a list of `SearchAttributeUpdate` instances. These can be created via `value_set()` calls on Search Attribute keys:

```python
workflow.upsert_search_attributes([
    customer_id_key.value_set("customer_2")
])
```

### Remove a Search Attribute from a Workflow {#remove-search-attribute}

To remove a Search Attribute that was previously set, call `value_unset()` on the Search Attribute key.

```python
workflow.upsert_search_attributes([
    customer_id_key.value_unset()
])
```

---

## Temporal Python SDK sandbox environment

The Temporal Python SDK enables you to run Workflow code in a sandbox environment to help prevent non-determinism errors in your application. The Temporal Workflow Sandbox for Python is not completely isolated, and some libraries can internally mutate state, which can result in breaking determinism.

## Benefits

Temporal's Python SDK uses a sandbox environment for Workflow runs to make developing Workflow code safer.

If a Workflow Execution performs a non-deterministic event, an exception is thrown, which results in failing the Task Worker. The Workflow will not progress until the code is fixed.

The Temporal Python sandbox offers a mechanism to _pass through modules_ from outside the sandbox. By default, this includes all standard library modules and Temporal modules. For performance and behavior reasons, you should also pass through modules whose calls will be deterministic and that live in separate files, such as data models, Activities, and Nexus services. For more information, see [Passthrough modules](#passthrough-modules).

## How it works

The Sandbox environment consists of two main components.

- [Global state isolation](#global-state-isolation)
- [Restrictions](#restrictions)

### Global state isolation

The first component of the Sandbox is global state isolation. Global state isolation uses `exec` to compile and evaluate statements.

Upon the start of a Workflow, the file in which the Workflow is defined is imported into a newly created sandbox. If a module is imported by the file, a known set, which includes all of Python's standard library, is _passed through_ from outside the sandbox. These modules are expected to be free of side effects and have their non-deterministic aspects restricted. For a full list of modules imported, see [Customize the Sandbox](#customize-the-sandbox).

### Restrictions

Restrictions prevent known non-deterministic library calls. This is achieved by using proxy objects on modules wrapped around the custom importer set in the sandbox. Restrictions apply at both the Workflow import level and the Workflow run time. A default set of restrictions prevents the most dangerous standard library calls.

## Skip Workflow Sandboxing

The following techniques aren't recommended, but they allow you to avoid, skip, or break through the sandbox environment.

Skipping Workflow Sandboxing results in a lack of determinism checks. Using the Workflow Sandboxing environment helps prevent non-determinism errors but doesn't completely eliminate the risk.

### Skip Sandboxing for a block of code

To skip a sandbox environment for a specific block of code in a Workflow, use [`sandbox_unrestricted()`](https://python.temporal.io/temporalio.workflow.unsafe.html#sandbox_unrestricted). The Workflow will run without sandbox restrictions.

```python
with temporalio.workflow.unsafe.sandbox_unrestricted():
    # Your code
```

### Skip Sandboxing for an entire Workflow

To skip a sandbox environment for a Workflow, set the `sandboxed` argument in the [`@workflow.defn`](https://python.temporal.io/temporalio.workflow.html#defn) decorator to false. The entire Workflow will run without sandbox restrictions.

```python
@workflow.defn(sandboxed=False)
```

### Skip Sandboxing for a Worker

To skip a sandbox environment for a Worker, set the `workflow_runner` keyword argument of the `Worker` init to [`UnsandboxedWorkflowRunner()`](https://python.temporal.io/temporalio.worker.UnsandboxedWorkflowRunner.html).
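A minimal sketch of that Worker configuration (the Task Queue name and registered Workflow are illustrative):

```python
from temporalio.client import Client
from temporalio.worker import UnsandboxedWorkflowRunner, Worker

async def run_worker(client: Client) -> None:
    worker = Worker(
        client,
        task_queue="my-task-queue",
        workflows=[MyWorkflow],  # a hypothetical Workflow class
        # Disables the sandbox, and with it determinism checks, for every
        # Workflow this Worker runs.
        workflow_runner=UnsandboxedWorkflowRunner(),
    )
    await worker.run()
```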
## Customize the sandbox

When creating the Worker, the `workflow_runner` defaults to [`SandboxedWorkflowRunner()`](https://python.temporal.io/temporalio.worker.workflow_sandbox.SandboxedWorkflowRunner.html). The `SandboxedWorkflowRunner` init accepts a `restrictions` keyword argument that defines a set of restrictions to apply to this sandbox. The [`SandboxRestrictions`](https://python.temporal.io/temporalio.worker.workflow_sandbox.SandboxRestrictions.html) dataclass is immutable and contains four fields that can be customized, but only three have notable values.

- [`passthrough_modules`](https://python.temporal.io/temporalio.worker.workflow_sandbox.SandboxRestrictions.html#passthrough_modules)
- [`invalid_module_members`](https://python.temporal.io/temporalio.worker.workflow_sandbox.SandboxRestrictions.html#invalid_module_members)
- [`import_notification_policy`](https://python.temporal.io/temporalio.worker.workflow_sandbox.SandboxRestrictions.html#import_notificaton_policy)

### Passthrough modules

By default, the sandbox completely reloads non-standard-library and non-Temporal modules for every Workflow run. Passing through a module means that the module will not be reloaded every time the Workflow runs. Instead, the module will be imported from outside the sandbox and used directly in the Workflow. This can improve performance because importing a module can be a time-consuming process, and passing through a module avoids this overhead.

:::note

Only pass through third-party modules that are known to be side-effect free, meaning they don't have any unintended consequences when imported and used multiple times. Because a passed-through module is used across Workflow runs without being reloaded, any side effects it has will be repeated. For the same reason, it's recommended to only pass through modules that are known to be deterministic, meaning they will always produce the same output given the same input.

:::

One way to pass through a module is at import time in the Workflow file using the [`imports_passed_through`](https://python.temporal.io/temporalio.workflow.unsafe.html#imports_passed_through) context manager.

```python
# my_workflow_file.py
from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    import pydantic

@workflow.defn
class MyWorkflow:
    # ...
```

Alternatively, this can be done at Worker creation time by customizing the runner's restrictions.

```python
# my_worker_file.py
from temporalio.worker import Worker
from temporalio.worker.workflow_sandbox import SandboxedWorkflowRunner, SandboxRestrictions

my_worker = Worker(
    ...,
    workflow_runner=SandboxedWorkflowRunner(
        restrictions=SandboxRestrictions.default.with_passthrough_modules("pydantic")
    )
)
```

In both cases, the `pydantic` module will now be passed through from outside the sandbox instead of being reloaded for every Workflow run.

### Invalid module members

`invalid_module_members` includes modules that cannot be accessed. Checks are compared against the fully qualified path to the item. For example, to remove a restriction on `datetime.date.today()`, see the following example.
```python
# my_worker_file.py
import dataclasses

from temporalio.worker import Worker
from temporalio.worker.workflow_sandbox import SandboxedWorkflowRunner, SandboxRestrictions

my_restrictions = dataclasses.replace(
    SandboxRestrictions.default,
    invalid_module_members=SandboxRestrictions.invalid_module_members_default.with_child_unrestricted(
        "datetime", "date", "today",
    ),
)
my_worker = Worker(..., workflow_runner=SandboxedWorkflowRunner(restrictions=my_restrictions))
```

Restrictions can also be added by piping (`|`) together [`SandboxMatcher`](https://python.temporal.io/temporalio.worker.workflow_sandbox.SandboxMatcher.html) instances. The following example restricts the `datetime.date` class from being used.

```python
# my_worker_file.py
import dataclasses

from temporalio.worker import Worker
from temporalio.worker.workflow_sandbox import (
    SandboxedWorkflowRunner,
    SandboxMatcher,
    SandboxRestrictions,
)

my_restrictions = dataclasses.replace(
    SandboxRestrictions.default,
    invalid_module_members=SandboxRestrictions.invalid_module_members_default
    | SandboxMatcher(
        children={"datetime": SandboxMatcher(use={"date"})},
    ),
)
my_worker = Worker(..., workflow_runner=SandboxedWorkflowRunner(restrictions=my_restrictions))
```

### Import Notification Policy

The sandbox's import notification policy specifies how the sandbox behaves when it imports modules in a way that may be unintentional. It covers two common scenarios: dynamic imports and enforcing module passthrough. Each can be controlled independently.

A dynamic import occurs when a module is imported after the Workflow is loaded into the sandbox. These imports are often invisible, and even when they don't do anything the sandbox restricts, they add memory overhead. By default, the [`WARN_ON_DYNAMIC_IMPORT`](https://python.temporal.io/temporalio.workflow.SandboxImportNotificationPolicy.html#WARN_ON_DYNAMIC_IMPORT) policy setting is enabled, and a warning will be emitted when a module that is not in the [passthrough modules](#passthrough-modules) list is dynamically imported.

The other notable policy settings apply when a module that was not passed through is imported into the sandbox. These settings are disabled by default and must be explicitly turned on. The [`WARN_ON_UNINTENTIONAL_PASSTHROUGH`](https://python.temporal.io/temporalio.workflow.SandboxImportNotificationPolicy.html#WARN_ON_UNINTENTIONAL_PASSTHROUGH) setting emits a warning when a module not included in the [passthrough modules](#passthrough-modules) list is imported into the sandbox. Similarly, the [`RAISE_ON_UNINTENTIONAL_PASSTHROUGH`](https://python.temporal.io/temporalio.workflow.SandboxImportNotificationPolicy.html#RAISE_ON_UNINTENTIONAL_PASSTHROUGH) setting raises an error when a non-passed-through module is imported.

The import notification policy can be set for specific imports by using the [`sandbox_import_notification_policy`](https://python.temporal.io/temporalio.workflow.unsafe.html#sandbox_import_notification_policy) context manager.

```python
# my_workflow_file.py
from temporalio import workflow

with workflow.unsafe.sandbox_import_notification_policy(
    workflow.SandboxImportNotificationPolicy.SILENT
):
    import my_module  # a hypothetical module whose import notifications are silenced

@workflow.defn
class MyWorkflow:
    # ...
```

This can also be done at Worker creation time by customizing the runner's restrictions.
```python
# my_worker_file.py
from temporalio import workflow
from temporalio.worker import Worker
from temporalio.worker.workflow_sandbox import SandboxedWorkflowRunner, SandboxRestrictions

my_worker = Worker(
    ...,
    workflow_runner=SandboxedWorkflowRunner(
        restrictions=SandboxRestrictions.default.with_import_notification_policy(
            workflow.SandboxImportNotificationPolicy.WARN_ON_DYNAMIC_IMPORT
            | workflow.SandboxImportNotificationPolicy.WARN_ON_UNINTENTIONAL_PASSTHROUGH
        )
    )
)
```

The [`sandbox_import_notification_policy`](https://python.temporal.io/temporalio.workflow.unsafe.html#sandbox_import_notification_policy) context manager will always be respected if used in combination with the restrictions customization.

For more information on the Python sandbox, see the following resources.

- [Python SDK README](https://github.com/temporalio/sdk-python)
- [Python API docs](https://python.temporal.io/index.html)

---

## Temporal Python SDK synchronous vs. asynchronous Activity implementations

The Temporal Python SDK supports multiple ways of implementing an Activity:

- Asynchronously using [`asyncio`](https://docs.python.org/3/library/asyncio.html)
- Synchronously multithreaded using [`concurrent.futures.ThreadPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor)
- Synchronously multiprocess using [`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) and [`multiprocessing.managers.SyncManager`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.managers.SyncManager)

It is important to implement your Activities using the correct method; otherwise, your application may fail in sporadic and unexpected ways. Which one you should use depends on your use case. This section provides guidance to help you choose the best approach.

## The Python Asynchronous Event Loop and Blocking Calls

First, let's look at how async event loops work in Python.

The Python async event loop runs in a thread and executes all tasks in its thread. When any task is running in the event loop, the loop is blocked, and no other tasks can run at the same time within that event loop. Whenever a task executes an `await` expression, the task is suspended, and the event loop begins or resumes execution of another task. This means the event loop can only pass the flow of control between tasks when the `await` keyword is executed.

If a program makes a blocking call, such as one that reads from a file, makes a synchronous request to a network service, waits for user input, or anything else that blocks the execution, the entire event loop must wait until that execution has completed. Blocking the async event loop in Python would turn your asynchronous program into a synchronous program that executes serially, defeating the entire purpose of using `asyncio`. This can also lead to deadlocks and unpredictable behavior that leaves tasks unable to execute. Debugging these issues can be difficult and time-consuming, because the source of the blocking call isn't always immediately evident.

Because of this, Python developers must be careful not to make blocking calls from within an asynchronous Activity, or they must use an async-safe library to perform these actions. For example, making an HTTP call with the popular `requests` library within an asynchronous Activity would block your event loop.
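As a sketch of the hazard (illustrative code; don't copy this pattern):

```python
import requests
from temporalio import activity

@activity.defn
async def fetch_greeting(url: str) -> str:
    # BUG: requests.get() is a blocking call. Inside an async Activity it
    # stalls the Worker's entire event loop, including server polling and
    # every other async Activity, until the HTTP request completes.
    return requests.get(url).text
```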
If you want to make an HTTP call from within an asynchronous Activity, you should use an async-safe HTTP library such as `aiohttp` or `httpx`. Otherwise, use a synchronous Activity.

## Python SDK Worker Execution Architecture

Python Workers have the following components for executing code:

- Your event loop, which runs Tasks from async Activities **plus the rest of the Temporal Worker, such as communicating with the server**.
- An executor for executing Activity Tasks from synchronous Activities. A thread pool executor is recommended.
- A thread pool executor for executing Workflow Tasks.

> See Also: [docs for](https://python.temporal.io/temporalio.worker.Worker.html#__init__) `worker.__init__()`

### Activities

- Async Activities and the Temporal Worker SDK code both run on the default asyncio event loop, or on whatever event loop you give the Worker.
- Synchronous Activities run in the `activity_executor`.

### Workflows

Workflow Tasks run in threads because they have the following three properties:

- They are CPU-bound.
- They need to be timed out for deadlock detection.
- They must not block other Workflow Tasks.

The `workflow_task_executor` is the thread pool these Tasks run on.

The fact that Workflow Tasks run in a thread pool can be confusing at first because Workflow Definitions are `async`. The key differentiator is that the `async` in Workflow Definitions isn't referring to the standard event loop -- it's referring to the Workflow's own event loop. Each Workflow gets its own "Workflow event loop," which is deterministic and is described in [the Python SDK blog](https://temporal.io/blog/durable-distributed-asyncio-event-loop#temporal-workflows-are-asyncio-event-loops). The Workflow event loop doesn't constantly loop -- it just gets cycled through during a Workflow Task to make as much progress as possible on all of its futures. When it can no longer make progress on any of its futures, the Workflow Task is complete.

### Number of CPU cores

Given Python's GIL, the only ways to use more than one core in a Python Worker are:

- Run more than one Worker Process.
- Run the synchronous Activities in a process pool executor (though a thread pool executor is recommended).

### A Worker infrastructure option: Separate Activity and Workflow Workers

To reduce the risk of event loops or executors getting blocked, some users choose to deploy separate Workers for Workflow Tasks and Activity Tasks, as sketched below.
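A minimal sketch of that deployment shape, assuming `MyWorkflow` and `my_activity` are defined elsewhere: both Workers poll the same Task Queue, but one handles only Workflow Tasks and the other only Activity Tasks.

```python
from concurrent.futures import ThreadPoolExecutor

from temporalio.client import Client
from temporalio.worker import Worker

async def run_workflow_worker(client: Client) -> None:
    # Process 1: executes Workflow Tasks only.
    worker = Worker(client, task_queue="my-task-queue", workflows=[MyWorkflow])
    await worker.run()

async def run_activity_worker(client: Client) -> None:
    # Process 2: executes Activity Tasks only.
    with ThreadPoolExecutor(max_workers=20) as executor:
        worker = Worker(
            client,
            task_queue="my-task-queue",
            activities=[my_activity],
            activity_executor=executor,
        )
        await worker.run()
```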
## Activity Definition

**By default, Activities should be synchronous rather than asynchronous.** You should only make an Activity asynchronous if you are certain that it doesn't block the event loop. If you have blocking code in an `async def` function, it blocks your event loop and the rest of the Temporal Worker, which can cause bugs that are hard to diagnose, including freezing your Worker and blocking Workflow progress (because Temporal can't tell the server that Workflow Tasks are completing).

Synchronous Activities help because they run in the `activity_executor` ([docs for](https://python.temporal.io/temporalio.worker.Worker.html#__init__) `worker.__init__()`) rather than on the global event loop:

- There's no risk of accidentally blocking the global event loop.
- If you have multiple Activity Tasks running in a thread pool rather than an event loop, one bad Activity Task can't slow down the others, because the OS scheduler preemptively switches between threads, which the event loop coordinator doesn't do.

> See Also:
> ["Types of Activities" section of Python SDK README](https://github.com/temporalio/sdk-python#types-of-activities)

## How to implement Synchronous Activities

The following code is a synchronous Activity Definition. It takes a name (`str`) as input and returns a customized greeting (`str`) as output. It makes a call to a microservice, and when making this call, you'll notice that it uses the `requests` library. This is safe to do in synchronous Activities.

```python
import urllib.parse

import requests
from temporalio import activity

class TranslateActivities:
    @activity.defn
    def greet_in_spanish(self, name: str) -> str:
        greeting = self.call_service("get-spanish-greeting", name)
        return greeting

    # Utility method for making calls to the microservices
    def call_service(self, stem: str, name: str) -> str:
        base = f"http://localhost:9999/{stem}"
        url = f"{base}?name={urllib.parse.quote(name)}"
        response = requests.get(url)
        return response.text
```

The preceding example doesn't share a session across the Activity, so `__init__` was removed. While `requests` does have the ability to create sessions, it's not certain that they're thread-safe.

Since `__init__` is no longer present or needed, these Activities could instead be implemented as plain decorated functions:

```python
import urllib.parse

import requests
from temporalio import activity

@activity.defn
def greet_in_spanish(name: str) -> str:
    greeting = call_service("get-spanish-greeting", name)
    return greeting

# Utility function for making calls to the microservices
def call_service(stem: str, name: str) -> str:
    base = f"http://localhost:9999/{stem}"
    url = f"{base}?name={urllib.parse.quote(name)}"
    response = requests.get(url)
    return response.text
```

Whether to implement Activities as class methods or functions is a design choice left up to the developer when cross-activity state isn't needed. Both are equally valid implementations.

### How to run Synchronous Activities on a Worker

When running synchronous Activities, the Worker needs to have an `activity_executor`. Temporal recommends using a `ThreadPoolExecutor` as shown here:

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=42) as executor:
    worker = Worker(
        # ...
        activity_executor=executor,
        # ...
    )
```
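If your synchronous Activities are CPU-bound, a process pool is the other option mentioned above. A sketch under those assumptions (`crunch_numbers` is a hypothetical CPU-bound synchronous Activity); note that multiprocess Activities also need a shared state manager so cross-process features such as heartbeating work:

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

from temporalio.worker import SharedStateManager, Worker

with ProcessPoolExecutor(max_workers=8) as executor:
    worker = Worker(
        client,  # an already-connected Client (assumed)
        task_queue="cpu-bound-task-queue",
        activities=[crunch_numbers],
        activity_executor=executor,
        shared_state_manager=SharedStateManager.create_from_multiprocessing(
            multiprocessing.Manager()
        ),
    )
```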
## How to Implement Asynchronous Activities

The following code is an implementation of the preceding Activity, but as an asynchronous Activity Definition. It makes a call to a microservice, accessed through HTTP, to request this greeting in Spanish.

This Activity uses the `aiohttp` library to make an async-safe HTTP request. Using the `requests` library here would have resulted in blocking code within the async event loop, which would block the entire loop. For more in-depth information about this issue, refer to the [Python asyncio documentation](https://docs.python.org/3/library/asyncio-dev.html#running-blocking-code).

The following code also implements the Activity Definition as a class, rather than a function. The `aiohttp` library requires an established `Session` to perform the HTTP request. It would be inefficient to establish a `Session` every time an Activity is invoked, so instead this code accepts a `Session` object as an instance parameter and makes it available to the methods. This approach is also beneficial when the execution is over and the `Session` needs to be closed.

In this example, the Activity supplies the name in the URL and retrieves the greeting from the body of the response.

```python
import urllib.parse

import aiohttp
from temporalio import activity
from temporalio.exceptions import ApplicationError

class TranslateActivities:
    def __init__(self, session: aiohttp.ClientSession):
        self.session = session

    @activity.defn
    async def greet_in_spanish(self, name: str) -> str:
        greeting = await self.call_service("get-spanish-greeting", name)
        return greeting

    # Utility method for making calls to the microservices
    async def call_service(self, stem: str, name: str) -> str:
        base = f"http://localhost:9999/{stem}"
        url = f"{base}?name={urllib.parse.quote(name)}"
        async with self.session.get(url) as response:
            translation = await response.text()
            if response.status >= 400:
                raise ApplicationError(
                    f"HTTP Error {response.status}: {translation}",
                    # We want to have Temporal automatically retry 5xx but not 4xx
                    non_retryable=response.status < 500,
                )
            return translation
```

### How to run synchronous code from an asynchronous activity

If your Activity is asynchronous and you don't want to change it to synchronous, but you need to run blocking code inside it, you can use Python utility functions to run synchronous code in an asynchronous function (see the sketch after this list):

- [`loop.run_in_executor()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor), which is also mentioned in the ["running blocking code" section of the "developing with asyncio" guide](https://docs.python.org/3/library/asyncio-dev.html#running-blocking-code)
- [`asyncio.to_thread()`](https://docs.python.org/3/library/asyncio-task.html#running-in-threads)
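For example, a minimal sketch using `asyncio.to_thread()`; the blocking helper `read_file` is hypothetical:

```python
import asyncio

from temporalio import activity

def read_file(path: str) -> str:
    # A blocking call that must not run directly on the event loop.
    with open(path) as f:
        return f.read()

@activity.defn
async def load_report(path: str) -> str:
    # Offload the blocking work to a thread; the event loop stays free to
    # make progress on other tasks while the file is read.
    return await asyncio.to_thread(read_file, path)
```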
## When Should You Use Async Activities

Asynchronous Activities have many advantages, such as a potential speed-up of execution. However, as discussed above, making unsafe calls within the async event loop can cause sporadic and difficult-to-diagnose bugs. For this reason, we recommend using asynchronous Activities _only_ when you are certain that your Activities are async-safe and don't make blocking calls. If you experience bugs that you think may be a result of an unsafe call being made in an asynchronous Activity, convert it to a synchronous Activity and see if the issue resolves.

---

## Schedules - Python SDK

This page shows how to do the following:

- [Schedule a Workflow](#schedule-a-workflow)
- [Create a Scheduled Workflow](#create)
- [Backfill a Scheduled Workflow](#backfill)
- [Delete a Scheduled Workflow](#delete)
- [Describe a Scheduled Workflow](#describe)
- [List a Scheduled Workflow](#list)
- [Pause a Scheduled Workflow](#pause)
- [Trigger a Scheduled Workflow](#trigger)
- [Update a Scheduled Workflow](#update)
- [Temporal Cron Jobs](#temporal-cron-jobs)
- [Start Delay](#start-delay)

## Schedule a Workflow {#schedule-a-workflow}

**How to Schedule a Workflow Execution**

Scheduling Workflows is a crucial aspect of any automation process, especially when dealing with time-sensitive tasks. By scheduling a Workflow, you can automate repetitive tasks, reduce the need for manual intervention, and ensure the timely execution of your business processes.

Use any of the following actions to help Schedule a Workflow Execution and take control of your automation process.

### Create a Scheduled Workflow {#create}

**How to create a Scheduled Workflow**

The create action enables you to create a new Schedule. When you create a new Schedule, a unique Schedule ID is generated, which you can use to reference the Schedule in other Schedule commands.

To create a Scheduled Workflow Execution in Python, use the [create_schedule()](https://python.temporal.io/temporalio.client.Client.html#create_schedule) asynchronous method on the Client. Then pass the Schedule ID and the Schedule object to the method to create a Scheduled Workflow Execution. Set the `action` parameter to `ScheduleActionStartWorkflow` to start a Workflow Execution. Optionally, you can set the `spec` parameter to `ScheduleSpec` to specify the schedule or set the `intervals` parameter to `ScheduleIntervalSpec` to specify the interval. Other options include: `cron_expressions`, `skip`, `start_at`, and `jitter`.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    await client.create_schedule(
        "workflow-schedule-id",
        Schedule(
            action=ScheduleActionStartWorkflow(
                YourSchedulesWorkflow.run,
                "my schedule arg",
                id="schedules-workflow-id",
                task_queue="schedules-task-queue",
            ),
            spec=ScheduleSpec(
                intervals=[ScheduleIntervalSpec(every=timedelta(minutes=2))]
            ),
            state=ScheduleState(note="Here's a note on my Schedule."),
        ),
    )
```

:::tip Schedule Auto-Deletion

Once a Schedule has completed creating all its Workflow Executions, the Temporal Service deletes it since it won't fire again. The Temporal Service doesn't guarantee when this removal will happen.

:::

### Backfill a Scheduled Workflow {#backfill}

**How to backfill a Scheduled Workflow**

The backfill action executes Actions ahead of their specified time range. This command is useful when you need to execute a missed or delayed Action, or when you want to test the Workflow before its scheduled time.

To backfill a Scheduled Workflow Execution in Python, use the [backfill()](https://python.temporal.io/temporalio.client.ScheduleHandle.html#backfill) asynchronous method on the Schedule Handle.

```python
from datetime import datetime, timedelta

from temporalio.client import Client, ScheduleBackfill, ScheduleOverlapPolicy

async def main():
    client = await Client.connect("localhost:7233")
    handle = client.get_schedule_handle(
        "workflow-schedule-id",
    )
    now = datetime.utcnow()
    await handle.backfill(
        ScheduleBackfill(
            start_at=now - timedelta(minutes=10),
            end_at=now - timedelta(minutes=9),
            overlap=ScheduleOverlapPolicy.ALLOW_ALL,
        ),
    )
```

### Delete a Scheduled Workflow {#delete}

**How to delete a Scheduled Workflow**

The delete action enables you to delete a Schedule. When you delete a Schedule, it does not affect any Workflows that were started by the Schedule.

To delete a Scheduled Workflow Execution in Python, use the [delete()](https://python.temporal.io/temporalio.client.ScheduleHandle.html#delete) asynchronous method on the Schedule Handle.

```python
async def main():
    client = await Client.connect("localhost:7233")
    handle = client.get_schedule_handle(
        "workflow-schedule-id",
    )
    await handle.delete()
```

### Describe a Scheduled Workflow {#describe}

**How to describe a Scheduled Workflow**

The describe action shows the current Schedule configuration, including information about past, current, and future Workflow Runs. This command is helpful when you want to get a detailed view of the Schedule and its associated Workflow Runs.

To describe a Scheduled Workflow Execution in Python, use the [describe()](https://python.temporal.io/temporalio.client.ScheduleHandle.html#describe) asynchronous method on the Schedule Handle.
You can get a complete list of the attributes of the Scheduled Workflow Execution from the [ScheduleDescription](https://python.temporal.io/temporalio.client.ScheduleDescription.html) class.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    handle = client.get_schedule_handle(
        "workflow-schedule-id",
    )
    desc = await handle.describe()

    print(f"Returns the note: {desc.schedule.state.note}")
```

### List a Scheduled Workflow {#list}

**How to list a Scheduled Workflow**

The list action lists all the available Schedules. This command is useful when you want to view a list of all the Schedules and their respective Schedule IDs.

To list all Schedules, use the [list_schedules()](https://python.temporal.io/temporalio.client.Client.html#list_schedules) asynchronous method on the Client. If a Schedule is added or deleted, it may not be available in the list immediately.

```python
# ...
async def main() -> None:
    client = await Client.connect("localhost:7233")
    async for schedule in await client.list_schedules():
        print(f"List Schedule Info: {schedule.info}.")
```

### Pause a Scheduled Workflow {#pause}

**How to pause a Scheduled Workflow**

The pause action enables you to pause and unpause a Schedule. When you pause a Schedule, all the future Workflow Runs associated with the Schedule are temporarily stopped. This command is useful when you want to temporarily halt a Workflow due to maintenance or any other reason.

To pause a Scheduled Workflow Execution in Python, use the [pause()](https://python.temporal.io/temporalio.client.ScheduleHandle.html#pause) asynchronous method on the Schedule Handle. You can pass a `note` to the `pause()` method to provide a reason for pausing the Schedule.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    handle = client.get_schedule_handle(
        "workflow-schedule-id",
    )
    await handle.pause(note="Pausing the schedule for now")
```

### Trigger a Scheduled Workflow {#trigger}

**How to trigger a Scheduled Workflow**

The trigger action triggers an immediate action with a given Schedule. By default, this action is subject to the Overlap Policy of the Schedule. This command is helpful when you want to execute a Workflow outside of its scheduled time.

To trigger a Scheduled Workflow Execution in Python, use the [trigger()](https://python.temporal.io/temporalio.client.ScheduleHandle.html#trigger) asynchronous method on the Schedule Handle.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    handle = client.get_schedule_handle(
        "workflow-schedule-id",
    )
    await handle.trigger()
```
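Because a trigger is subject to the Schedule's Overlap Policy by default, you can pass an explicit policy when you need different behavior. A short sketch:

```python
from temporalio.client import ScheduleOverlapPolicy

# Allow this manual run even if a scheduled run is already in progress.
await handle.trigger(overlap=ScheduleOverlapPolicy.ALLOW_ALL)
```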
### Update a Scheduled Workflow {#update}

**How to update a Scheduled Workflow**

The update action enables you to update an existing Schedule. This command is useful when you need to modify the Schedule's configuration, such as changing the start time, end time, or interval.

To update a Schedule, create a function that takes `ScheduleUpdateInput` and returns `ScheduleUpdate`; this callback builds the update from the current description. The following example updates the Schedule to use a new argument.

```python
# ...
async def update_schedule_simple(input: ScheduleUpdateInput) -> ScheduleUpdate:
    schedule_action = input.description.schedule.action

    if isinstance(schedule_action, ScheduleActionStartWorkflow):
        schedule_action.args = ["my new schedule arg"]
    return ScheduleUpdate(schedule=input.description.schedule)
```
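Pass that callback to the Schedule Handle's [`update()`](https://python.temporal.io/temporalio.client.ScheduleHandle.html#update) method to apply it. A short sketch:

```python
handle = client.get_schedule_handle("workflow-schedule-id")
# The callback receives the current description and returns the new Schedule.
await handle.update(update_schedule_simple)
```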
## Temporal Cron Jobs {#temporal-cron-jobs}

**How to use Temporal Cron Jobs**

:::caution Cron support is not recommended

We recommend using [Schedules](https://docs.temporal.io/schedule) instead of Cron Jobs. Schedules were built to provide a better developer experience, including more configuration options and the ability to update or pause running Schedules.

:::

A [Temporal Cron Job](/cron-job) is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution.

A Cron Schedule is provided as an option when the call to spawn a Workflow Execution is made.

You can set each Workflow to repeat on a schedule with the `cron_schedule` option from either the [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`execute_workflow()`](https://python.temporal.io/temporalio.client.Client.html#execute_workflow) asynchronous methods.

```python
# ...
result = await client.execute_workflow(
    CronWorkflow.run,
    id="your-workflow-id",
    task_queue="your-task-queue",
    cron_schedule="* * * * *",
)
print(f"Results: {result}")
```

Temporal Workflow Schedule Cron strings follow this format:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```

## Start Delay {#start-delay}

**How to use Start Delay**

Use the `start_delay` option to schedule a Workflow Execution at a specific one-time future point rather than on a recurring schedule. Pass the `start_delay` option to either the [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`execute_workflow()`](https://python.temporal.io/temporalio.client.Client.html#execute_workflow) asynchronous methods in the Client.

```python
import asyncio
from datetime import timedelta

from temporalio.client import Client

async def main():
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        YourWorkflow.run,
        "your name",
        id="your-workflow-id",
        task_queue="your-task-queue",
        start_delay=timedelta(hours=1, minutes=20, seconds=30)
    )
    print(f"Result: {result}")

if __name__ == "__main__":
    asyncio.run(main())
```

---

## Set up your local with the Python SDK

---

# Quickstart

Configure your local development environment to get started developing with Temporal.

## Install Python

Make sure you have Python installed. Check your version of Python with the following command:

```bash
python3 -V
```

```
python 3.13.3
```

## Install the Temporal Python SDK

You should install the Temporal Python SDK in your project using a virtual environment. Create a directory for your Temporal project, switch to the new directory, create a Python virtual environment, activate it, and then install the Temporal SDK:

```bash
mkdir temporal-project
cd temporal-project
python3 -m venv env
source env/bin/activate
pip install temporalio
```

## Install Temporal CLI

Next, you'll configure a local Temporal Service for development. The fastest way to get a development version of the Temporal Service running on your local machine is to use [Temporal CLI](https://docs.temporal.io/cli). Choose your operating system to install Temporal CLI:

- **macOS**: Install the Temporal CLI using Homebrew:

  ```bash
  brew install temporal
  ```

- **Windows**: Download the Temporal CLI archive for your architecture (Windows amd64 or Windows arm64), extract it, and add `temporal.exe` to your `PATH`.

- **Linux**: Download the Temporal CLI for your architecture (Linux amd64 or Linux arm64), extract the archive, and move the `temporal` binary into your `PATH`, for example:

  ```bash
  sudo mv temporal /usr/local/bin
  ```

## Start the development server

Once you've installed Temporal CLI and added it to your PATH, open a new Terminal window and run the following command:

```bash
temporal server start-dev
```

This command starts a local Temporal Service. It starts the Web UI, creates the default Namespace, and uses an in-memory database. The Temporal Service will be available on `localhost:7233`, and the Temporal Web UI will be available at http://localhost:8233.

The Temporal Web UI may be on a different port in some examples or tutorials. To change the port for the Web UI, use the `--ui-port` option when starting the server:

```bash
temporal server start-dev --ui-port 8080
```

The Temporal Web UI will then be available at http://localhost:8080.

Leave the local Temporal Service running as you work through tutorials and other projects. You can stop the Temporal Service at any time by pressing `CTRL+C`.

Once you have everything installed, you're ready to build apps with Temporal on your local machine.

## Run Hello World: Test Your Installation

Now let's verify your setup is working by creating and running a complete Temporal application with both a Workflow and Activity. This test will confirm that:

- The Temporal Python SDK is properly installed
- Your local Temporal Service is running
- You can successfully create and execute Workflows and Activities
- The communication between components is functioning correctly

### 1. Create the Activity

Create an Activity file (`activities.py`):

```python
from temporalio import activity

@activity.defn
async def greet(name: str) -> str:
    return f"Hello {name}"
```

An Activity is a normal function or method that executes a single, well-defined action (either short- or long-running). Activities often involve interacting with the outside world, such as sending emails, making network requests, writing to a database, or calling an API, all of which are prone to failure. If an Activity fails, Temporal automatically retries it based on your configuration.

### 2. Create the Workflow

Create a Workflow file (`workflows.py`):

```python
from datetime import timedelta

from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from activities import greet

@workflow.defn
class SayHelloWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            greet,
            name,
            schedule_to_close_timeout=timedelta(seconds=10),
        )
```

Workflows orchestrate Activities and contain the application logic. Temporal Workflows are resilient. They can run, and keep running, for years, even if the underlying infrastructure fails. If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off.
### 3. Create the Worker

Create a Worker file (`worker.py`):

```python
import asyncio

from temporalio import workflow
from temporalio.client import Client
from temporalio.worker import Worker

with workflow.unsafe.imports_passed_through():
    from workflows import SayHelloWorkflow
    from activities import greet

async def main():
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="my-task-queue",
        workflows=[SayHelloWorkflow],
        activities=[greet],
    )
    print("Worker started.")
    await worker.run()

if __name__ == "__main__":
    asyncio.run(main())
```

Run the Worker by opening up a new terminal:

```bash
source env/bin/activate
python3 worker.py
```

Keep this terminal running; you should see "Worker started." displayed.

With your Activity and Workflow defined, you need a Worker to execute them. A Worker polls the Task Queue you configure it to poll, looking for work to do. Once the Worker dequeues a Workflow or Activity Task from the Task Queue, it executes that Task. Workers are a crucial part of your Temporal application, as they're what actually execute the tasks defined in your Workflows and Activities. For more information on Workers, see [Understanding Temporal](/evaluate/understanding-temporal#workers) and a [deep dive into Workers](/workers).

### 4. Execute the Workflow

Now that your Worker is running, it's time to start a Workflow Execution. This final step validates that everything is working correctly. Create a separate file called `starter.py`:

```python
import asyncio
import uuid

from temporalio.client import Client

async def main():
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        "SayHelloWorkflow",
        "Temporal",
        id=f"say-hello-workflow-{uuid.uuid4()}",
        task_queue="my-task-queue",
    )
    print("Workflow result:", result)

if __name__ == "__main__":
    asyncio.run(main())
```

While the Worker is still running, run the following command in a new terminal:

```bash
source env/bin/activate
python3 starter.py
```

### Verify Success

If everything is working correctly, you should see:

- The Worker processing the Workflow and Activity
- Output: `Workflow result: Hello Temporal`
- Workflow Execution details in the [Temporal Web UI](http://localhost:8233)

Next: Run your first Temporal Application, where you create a basic Workflow and run it with the Temporal Python SDK.

---

## Temporal Client - Python SDK

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the Temporal Service. Communication with a Temporal Service lets you perform actions such as starting Workflow Executions, sending Signals and Queries to Workflow Executions, getting Workflow results, and more.

This page shows you how to do the following using the Python SDK with the Temporal Client:

- [Connect to a local development Temporal Service](#connect-to-development-service)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Start a Workflow Execution](#start-workflow-execution)
- [Get Workflow results](#get-workflow-results)

A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.

## Connect to development Temporal Service {#connect-to-development-service}

Use [`Client.connect`](https://python.temporal.io/temporalio.client.Client.html#connect) to create a client. Connection options include the Temporal Server address, Namespace, and (optionally) TLS configuration.
You can provide these options directly in code, load them from **environment variables**, or use a **TOML configuration file** via the [`envconfig`](https://python.temporal.io/temporalio.envconfig.html) helpers. We recommend environment variables or a configuration file for secure, repeatable configuration.

When you're running a Temporal Service locally (such as with the [Temporal CLI dev server](https://docs.temporal.io/cli/server#start-dev)), the required options are minimal. If you don't specify a host/port, most connections default to `127.0.0.1:7233` and the `default` Namespace.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client.

You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the configuration file path, the SDK looks for it at the path `~/.config/temporalio/temporal.toml` or the equivalent on your OS. Refer to [Environment Configuration](../environment-configuration.mdx#configuration-methods) for more details about configuration files and profiles.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines two profiles: `default` and `prod`. Each profile has its own set of connection options.

```toml title="config.toml"
# Default profile for local development
[profile.default]
address = "localhost:7233"
namespace = "default"

# Optional: Add custom gRPC headers
[profile.default.grpc_meta]
my-custom-header = "development-value"
trace-id = "dev-trace-123"

# Production profile for Temporal Cloud
[profile.prod]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"

# TLS configuration for production
[profile.prod.tls]
# TLS auto-enables when TLS config or an API key is present
# disabled = false
client_cert_path = "/etc/temporal/certs/client.pem"
client_key_path = "/etc/temporal/certs/client.key"

# Custom headers for production
[profile.prod.grpc_meta]
environment = "production"
service-version = "v1.2.3"
```

You can create a Temporal Client using a profile from the configuration file using the `ClientConfig.load_client_connect_config` function as follows. In this example, you load the `default` profile for local development:

```python
import asyncio
from pathlib import Path

from temporalio.client import Client
from temporalio.envconfig import ClientConfig

async def main():
    """
    Loads the default profile from the config.toml file in this directory.
    """
    print("--- Loading default profile from config.toml ---")

    # For this sample to be self-contained, we explicitly provide the path to
    # the config.toml file included in this directory.
    # By default though, the config.toml file will be loaded from
    # ~/.config/temporalio/temporal.toml (or the equivalent standard config
    # directory on your OS).
    config_file = Path(__file__).parent / "config.toml"

    # load_client_connect_config is a helper that loads a profile and prepares
    # the config dictionary for Client.connect. By default, it loads the
    # "default" profile.
    connect_config = ClientConfig.load_client_connect_config(
        config_file=str(config_file)
    )

    print(f"Loaded 'default' profile from {config_file}.")
    print(f"  Address: {connect_config.get('target_host')}")
    print(f"  Namespace: {connect_config.get('namespace')}")
    print(f"  gRPC Metadata: {connect_config.get('rpc_metadata')}")

    print("\nAttempting to connect to client...")
    try:
        await Client.connect(**connect_config)  # type: ignore
        print("✅ Client connected successfully!")
    except Exception as e:
        print(f"❌ Failed to connect: {e}")

if __name__ == "__main__":
    asyncio.run(main())
```
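A shorter sketch, assuming the same `config.toml` is at the default location: load the `prod` profile by name instead of the `default` one.

```python
from temporalio.client import Client
from temporalio.envconfig import ClientConfig

async def connect_prod() -> Client:
    # Loads the "prod" profile from ~/.config/temporalio/temporal.toml
    # (or from the path in TEMPORAL_CONFIG_FILE, if set).
    connect_config = ClientConfig.load_client_connect_config(profile="prod")
    return await Client.connect(**connect_config)  # type: ignore
```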
Use the `envconfig` package to set connection options for the Temporal Client using environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this also loads the default profile in that configuration file. However, any options set via environment variables take precedence.

Set the following environment variables before running your Python application. Replace the placeholder values with your actual configuration. Since this is for a local development Temporal Service, the values connect to `localhost:7233` and the `default` Namespace. You may omit these variables entirely since they're the defaults.

```bash
export TEMPORAL_NAMESPACE="default"
export TEMPORAL_ADDRESS="localhost:7233"
```

After setting the environment variables, you can create a Temporal Client as follows:

```python
import asyncio

from temporalio.client import Client
from temporalio.envconfig import ClientConfig

async def main():
    # load_client_connect_config is a helper that loads a profile and prepares
    # the config dictionary for Client.connect. By default, it loads the
    # "default" profile.
    connect_config = ClientConfig.load_client_connect_config()

    print(f"  Address: {connect_config.get('target_host')}")
    print(f"  Namespace: {connect_config.get('namespace')}")
    print(f"  gRPC Metadata: {connect_config.get('rpc_metadata')}")

    print("\nAttempting to connect to client...")
    try:
        await Client.connect(**connect_config)  # type: ignore
        print("✅ Client connected successfully!")
    except Exception as e:
        print(f"❌ Failed to connect: {e}")

if __name__ == "__main__":
    asyncio.run(main())
```

If you don't want to use environment variables or a configuration file, you can specify connection options directly in code. This is convenient for local development and testing. You can also load a base configuration from environment variables or a configuration file, and then override specific options in code.

Use the `connect()` method on the `Client` class to create a Temporal Client and connect to the Temporal Service.
```python
# ...
async def main():
    client = await Client.connect("localhost:7233")

    result = await client.execute_workflow(
        YourWorkflow.run,
        "your name",
        id="your-workflow-id",
        task_queue="your-task-queue",
    )

    print(f"Result: {result}")

if __name__ == "__main__":
    asyncio.run(main())
```

## Connect to Temporal Cloud {#connect-to-temporal-cloud}

You can connect to Temporal Cloud using either an [API key](/cloud/api-keys) or through mTLS.

Connection to Temporal Cloud or any secured Temporal Service requires additional connection options compared to connecting to an unsecured local development instance:

- Your credentials for authentication.
  - If you are using an API key, provide the API key value.
  - If you are using mTLS, provide the mTLS CA certificate and mTLS private key.
- Your _Namespace and Account ID_ combination, which follows the format `<namespace>.<account-id>`.
- The _endpoint_ may vary. The most common endpoint used is the gRPC regional endpoint, which follows the format: `<region>.<cloud-provider>.api.temporal.io:7233`.
  - For Namespaces with High Availability features with API key authentication enabled, use the gRPC Namespace endpoint: `<namespace>.<account-id>.tmprl.cloud:7233`. This allows automated failover without needing to switch endpoints.

You can find the Namespace and Account ID, as well as the endpoint, on the Namespaces tab:

![The Namespace and Account ID combination on the left, and the regional endpoint on the right](/img/cloud/apikeys/namespaces-and-regional-endpoints.png)

You can provide these connection options using environment variables, a configuration file, or directly in code.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. For a list of all available configuration options you can set in the TOML file, refer to [Environment Configuration](/references/client-environment-configuration).

You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the path to the configuration file, the SDK looks for it at the default path `~/.config/temporalio/temporal.toml`.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::
For example, the following TOML configuration file defines a `staging` profile with the necessary connection options to connect to Temporal Cloud via an API key:

```toml
# Cloud profile for Temporal Cloud
[profile.staging]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"
```

If you want to use mTLS authentication instead of an API key, replace the `api_key` field with your mTLS certificate and private key:

```toml
# Cloud profile for Temporal Cloud
[profile.staging]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
tls_client_cert_data = "your-tls-client-cert-data"
tls_client_key_path = "your-tls-client-key-path"
```

With the connection options defined in the configuration file, use the [`connect` method](https://python.temporal.io/temporalio.client.Client.html#connect) on the `Client` class to create a Temporal Client using the `staging` profile as follows. After loading the profile, you can also programmatically override specific connection options before creating the client.

```python
import asyncio
from pathlib import Path

from temporalio.client import Client
from temporalio.envconfig import ClientConfig

async def main():
    """
    Demonstrates loading a named profile and overriding values programmatically.
    """
    print("--- Loading 'staging' profile with programmatic overrides ---")

    config_file = Path(__file__).parent / "config.toml"
    profile_name = "staging"

    print(
        "The 'staging' profile in config.toml has an incorrect address (localhost:9999)."
    )
    print("We'll programmatically override it to the correct address.")

    # Load the 'staging' profile.
    connect_config = ClientConfig.load_client_connect_config(
        profile=profile_name,
        config_file=str(config_file),
    )

    # Override the target host to the correct address.
    # This is the recommended way to override configuration values.
    connect_config["target_host"] = "localhost:7233"

    print(f"\nLoaded '{profile_name}' profile from {config_file} with overrides.")
    print(
        f"  Address: {connect_config.get('target_host')} (overridden from localhost:9999)"
    )
    print(f"  Namespace: {connect_config.get('namespace')}")

    print("\nAttempting to connect to client...")
    try:
        await Client.connect(**connect_config)  # type: ignore
        print("✅ Client connected successfully!")
    except Exception as e:
        print(f"❌ Failed to connect: {e}")

if __name__ == "__main__":
    asyncio.run(main())
```

The following environment variables are required to connect to Temporal Cloud:

- `TEMPORAL_NAMESPACE`: Your Namespace and Account ID combination in the format `<namespace>.<account-id>`.
- `TEMPORAL_ADDRESS`: The gRPC endpoint for your Temporal Cloud Namespace.
- `TEMPORAL_API_KEY`: Your API key value. Required if you are using API key authentication.
- `TEMPORAL_TLS_CLIENT_CERT_DATA` or `TEMPORAL_TLS_CLIENT_CERT_PATH`: Your mTLS client certificate data or file path. Required if you are using mTLS authentication.
- `TEMPORAL_TLS_CLIENT_KEY_DATA` or `TEMPORAL_TLS_CLIENT_KEY_PATH`: Your mTLS client private key data or file path. Required if you are using mTLS authentication.

Ensure these environment variables exist in your environment before running your Python application. Use the `envconfig` module to set connection options for the Temporal Client using environment variables; the `ClientConfig.load_client_connect_config()` helper automatically loads all environment variables.
For a list of all available environment variables and their default values, refer to [Environment Configuration](../environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this also loads the default profile in the configuration file. However, any options set via environment variables take precedence.

After setting the environment variables, use the following code to create the Temporal Client:

```python
import asyncio

from temporalio.client import Client
from temporalio.envconfig import ClientConfig


async def main():
    # load_client_connect_config is a helper that loads a profile and prepares
    # the config dictionary for Client.connect. By default, it loads the
    # "default" profile.
    connect_config = ClientConfig.load_client_connect_config()

    print(f"  Address: {connect_config.get('target_host')}")
    print(f"  Namespace: {connect_config.get('namespace')}")
    print(f"  gRPC Metadata: {connect_config.get('rpc_metadata')}")

    print("\nAttempting to connect to client...")
    try:
        await Client.connect(**connect_config)  # type: ignore
        print("✅ Client connected successfully!")
    except Exception as e:
        print(f"❌ Failed to connect: {e}")


if __name__ == "__main__":
    asyncio.run(main())
```

You can also specify connection options directly in code to connect to Temporal Cloud. To create an initial connection, provide the endpoint, Namespace and Account ID combination, and API key values to the `Client.connect` method:

```python
client = await Client.connect(
    "<endpoint>",
    namespace="<namespace_id>.<account_id>",
    api_key="<api_key>",
    tls=True,
)
```

To connect using mTLS instead of an API key, provide the mTLS certificate and private key as follows. View the source code in the context of the rest of the application code.

```python
from temporalio.client import Client, TLSConfig

# ...

async def main():
    with open("client-cert.pem", "rb") as f:
        client_cert = f.read()
    with open("client-private-key.pem", "rb") as f:
        client_private_key = f.read()
    client = await Client.connect(
        "your-custom-namespace.tmprl.cloud:7233",
        namespace="<namespace_id>.<account_id>",
        tls=TLSConfig(
            client_cert=client_cert,
            client_private_key=client_private_key,
            # domain=domain,  # TLS domain
            # server_root_ca_cert=server_root_ca_cert,  # root CA to validate the server cert
        ),
    )
```

For more information about managing and generating client certificates for Temporal Cloud, see [How to manage certificates in Temporal Cloud](/cloud/certificates).

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see [Temporal Customization Samples](https://github.com/temporalio/samples-server).

## Start a Workflow Execution {#start-workflow-execution}

**How to start a Workflow Execution using the Python SDK**

[Workflow Execution](/workflow-execution) semantics rely on several parameters. To start a Workflow Execution, you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

In the examples below, all Workflow Executions are started using a Temporal Client.
To spawn Workflow Executions from within another Workflow Execution, use either the [Child Workflow](/develop/python/child-workflows) or External Workflow APIs.

See the [Customize Workflow Type](/develop/python/core-application#workflow-type) section to learn how to customize the name of the Workflow Type.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event ([WorkflowExecutionStarted](/references/events#workflowexecutionstarted)) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first [WorkflowTaskScheduled](/references/events#workflowtaskscheduled) Event.

To start a Workflow Execution in Python, use either the [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`execute_workflow()`](https://python.temporal.io/temporalio.client.Client.html#execute_workflow) asynchronous methods in the Client.

View the source code in the context of the rest of the application code.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        YourWorkflow.run,
        "your name",
        id="your-workflow-id",
        task_queue="your-task-queue",
    )
    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())
```

### Set a Workflow's Task Queue {#set-task-queue}

**How to set a Workflow's Task Queue using the Python SDK**

In most SDKs, the only Workflow Option that must be set is the name of the [Task Queue](/task-queue). For any code to execute, a Worker Process must be running that contains a Worker Entity that is polling the same Task Queue name.

To set a Task Queue in Python, specify the `task_queue` argument when executing a Workflow with either the [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`execute_workflow()`](https://python.temporal.io/temporalio.client.Client.html#execute_workflow) methods.

View the source code in the context of the rest of the application code.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        YourWorkflow.run,
        "your name",
        id="your-workflow-id",
        task_queue="your-task-queue",
    )
    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())
```

### Set a Workflow Id {#workflow-id}

**How to set a Workflow Id using the Python SDK**

You must set a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). When setting a Workflow Id, we recommend mapping it to a business process or business entity identifier, such as an order identifier or customer identifier.

To set a Workflow Id in Python, specify the `id` argument when executing a Workflow with either the [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`execute_workflow()`](https://python.temporal.io/temporalio.client.Client.html#execute_workflow) methods. The `id` argument should be a unique identifier for the Workflow Execution.

View the source code in the context of the rest of the application code.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        YourWorkflow.run,
        "your name",
        id="your-workflow-id",
        task_queue="your-task-queue",
    )
    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())
```

### Get the results of a Workflow Execution {#get-workflow-results}

**How to get the results of a Workflow Execution using the Python SDK**

If the call to start a Workflow Execution is successful, you will gain access to the Workflow Execution's Run Id. The Workflow Id, Run Id, and Namespace may be used to uniquely identify a Workflow Execution in the system and get its result.

It's possible to either block progress on the result (synchronous execution) or get the result at some later point in time (asynchronous execution). You can also use Queries to access the state and results of Workflow Executions.

Use [`start_workflow()`](https://python.temporal.io/temporalio.client.Client.html#start_workflow) or [`get_workflow_handle()`](https://python.temporal.io/temporalio.client.Client.html#get_workflow_handle) to return a Workflow handle. Then use the [`result`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#result) method to await the result of the Workflow.

To get a handle for an existing Workflow by its Id, you can use [`get_workflow_handle()`](https://python.temporal.io/temporalio.client.Client.html#get_workflow_handle), or use [`get_workflow_handle_for()`](https://python.temporal.io/temporalio.client.Client.html#get_workflow_handle_for) for type safety. Then use [`describe()`](https://python.temporal.io/temporalio.client.WorkflowHandle.html#describe) to get the current status of the Workflow. If the Workflow does not exist, this call fails.

View the source code in the context of the rest of the application code.

```python
# ...
async def main():
    client = await Client.connect("localhost:7233")
    handle = client.get_workflow_handle(
        workflow_id="your-workflow-id",
    )
    results = await handle.result()
    print(f"Result: {results}")


if __name__ == "__main__":
    asyncio.run(main())
```

---

## Temporal Nexus - Python SDK

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Python SDK support for Nexus is at [Public Preview](/evaluate/development-production-features/release-stages#public-preview). Features in Public Preview may undergo further development and testing before they are made Generally Available. These features are being refined and are recommended for production usage. The APIs may undergo changes; however, Temporal's goal is to maintain backward compatibility.

:::

Use [Temporal Nexus](/evaluate/nexus) to connect Temporal Applications within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations.
This page shows how to do the following:

- [Run a development Temporal Service with Nexus enabled](#run-the-temporal-nexus-development-server)
- [Create caller and handler Namespaces](#create-caller-handler-namespaces)
- [Create a Nexus Endpoint to route requests from caller to handler](#create-nexus-endpoint)
- [Define the Nexus Service contract](#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](#develop-caller-workflow-nexus-service)
- [Understand exceptions in Nexus Operations](#exceptions-in-nexus-operations)
- [Cancel a Nexus Operation](#canceling-a-nexus-operation)
- [Make Nexus calls across Namespaces in Temporal Cloud](#nexus-calls-across-namespaces-temporal-cloud)

:::note

This documentation uses source code derived from the [Python Nexus sample](https://github.com/temporalio/samples-python/tree/main/hello_nexus).

:::

## Run the Temporal Development Server with Nexus enabled {#run-the-temporal-nexus-development-server}

Prerequisites:

- [Install the latest Temporal CLI](https://learn.temporal.io/getting_started/python/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli) (`v1.3.0` or higher recommended)
- [Install the latest Temporal Python SDK](https://learn.temporal.io/getting_started/python/dev_environment/#add-temporal-python-sdk-dependencies) (`v1.14.1` or higher)

The first step in working with Temporal Nexus involves starting a Temporal Server with Nexus enabled.

```
temporal server start-dev
```

This command automatically starts the Temporal development server with the Web UI, and creates the `default` Namespace. It uses an in-memory database, so do not use it for real use cases.

The Temporal Web UI should now be accessible at [http://localhost:8233](http://localhost:8233), and the Temporal Server should now be available for client connections on `localhost:7233`.

## Create caller and handler Namespaces {#create-caller-handler-namespaces}

Before setting up Nexus Endpoints, create separate Namespaces for the caller and handler.

```
temporal operator namespace create --namespace my-target-namespace
temporal operator namespace create --namespace my-caller-namespace
```

For this example, `my-target-namespace` will contain the Nexus Operation handler, and you will use a Workflow in `my-caller-namespace` to call that Operation handler. We use different Namespaces to demonstrate cross-Namespace Nexus calls.

## Create a Nexus Endpoint to route requests from caller to handler {#create-nexus-endpoint}

After establishing caller and handler Namespaces, the next step is to create a Nexus Endpoint to route requests.

```
temporal operator nexus endpoint create \
  --name my-nexus-endpoint-name \
  --target-namespace my-target-namespace \
  --target-task-queue my-handler-task-queue
```

You can also use the Web UI to create the Namespaces and the Nexus Endpoint.

## Define the Nexus Service contract {#define-nexus-service-contract}

Defining a clear contract for the Nexus Service is crucial for smooth communication. In this example, there is a service package that describes the Service and Operation names along with input/output types for caller Workflows to use the Nexus Endpoint.

Each [Temporal SDK includes and uses a default Data Converter](https://docs.temporal.io/dataconversion). The default data converter encodes payloads in the following order: Null, Byte array, Protobuf JSON, and JSON.
In a polyglot environment (one where more than one language and SDK is used to develop a Temporal solution), Protobuf and JSON are common choices. This example uses Python dataclasses serialized into JSON.

[hello_nexus/service.py](https://github.com/temporalio/samples-python/blob/main/hello_nexus/service.py)

```python
from dataclasses import dataclass

import nexusrpc


@dataclass
class MyInput:
    name: str


@dataclass
class MyOutput:
    message: str


@nexusrpc.service
class MyNexusService:
    my_sync_operation: nexusrpc.Operation[MyInput, MyOutput]
    my_workflow_run_operation: nexusrpc.Operation[MyInput, MyOutput]
```

## Develop a Nexus Service handler and Operation handlers {#develop-nexus-service-operation-handlers}

Nexus Operation handlers are typically defined in the same Worker as the underlying Temporal primitives they abstract.

Operation handlers can decide if a given Nexus Operation will be synchronous or asynchronous. They can execute arbitrary code, and invoke underlying Temporal primitives such as a Workflow, Query, Signal, or Update.

The `nexusrpc.handler` and `temporalio.nexus` modules have utilities to help create Nexus Operations:

- `nexusrpc.handler.sync_operation` - Create a synchronous operation handler
- `nexus.workflow_run_operation` - Create an asynchronous operation handler that starts a Workflow

### Develop a Synchronous Nexus Operation handler

The `@nexusrpc.handler.sync_operation` decorator is for exposing simple RPC handlers.

[hello_nexus/handler/service_handler.py](https://github.com/temporalio/samples-python/blob/main/hello_nexus/handler/service_handler.py)

```python
import nexusrpc.handler


@nexusrpc.handler.service_handler(service=MyNexusService)
class MyNexusServiceHandler:
    @nexusrpc.handler.sync_operation
    async def my_sync_operation(
        self, ctx: nexusrpc.handler.StartOperationContext, input: MyInput
    ) -> MyOutput:
        return MyOutput(message=f"Hello {input.name} from sync operation!")
```

A synchronous operation handler must return quickly (in less than `10s`). In addition to performing simple CPU-bound computations such as the one above, implementations are free to make arbitrary calls to other services or databases. The handler function can access an SDK client that can be used to execute Signals, Updates, or Queries against a Workflow, or to do other client operations such as listing Workflows.

The [nexus_sync_operations](https://github.com/temporalio/samples-python/blob/main/nexus_sync_operations) sample shows how to create a Nexus Service that uses synchronous operations to send Updates and Queries:

[nexus_sync_operations/handler/service_handler.py](https://github.com/temporalio/samples-python/blob/main/nexus_sync_operations/handler/service_handler.py)

```python
from temporalio import nexus


@nexusrpc.handler.service_handler(service=GreetingService)
class GreetingServiceHandler:
    @property
    def greeting_workflow_handle(self) -> WorkflowHandle[GreetingWorkflow, str]:
        return nexus.client().get_workflow_handle_for(
            GreetingWorkflow.run, self.workflow_id
        )
    ...
```

In addition to `nexus.client()`, you can use `nexus.info()` to access information about the currently executing Nexus Operation, including its Task Queue.

### Develop an Asynchronous Nexus Operation handler to start a Workflow

Use the `@nexus.workflow_run_operation` decorator, which is the easiest way to expose a Workflow as an operation.
[hello_nexus/handler/service_handler.py](https://github.com/temporalio/samples-python/blob/main/hello_nexus/handler/service_handler.py)

```python
import uuid

from temporalio import nexus


@nexusrpc.handler.service_handler(service=MyNexusService)
class MyNexusServiceHandler:
    @nexus.workflow_run_operation
    async def my_workflow_run_operation(
        self, ctx: nexus.WorkflowRunOperationContext, input: MyInput
    ) -> nexus.WorkflowHandle[MyOutput]:
        return await ctx.start_workflow(
            WorkflowStartedByNexusOperation.run,
            input,
            id=str(uuid.uuid4()),
        )
```

Workflow IDs should typically be business-meaningful IDs and are used to dedupe Workflow starts. In general, the ID should be passed in the Operation input as part of the Nexus Service contract.

:::tip RESOURCES

[Attach multiple Nexus callers to a handler Workflow](/nexus/operations#attaching-multiple-nexus-callers) with a Conflict-Policy of Use-Existing.

:::

#### Map a Nexus Operation input to multiple Workflow arguments

A Nexus Operation can only take one input parameter. If you want a Nexus Operation to start a Workflow that takes multiple arguments, use the `ctx.start_workflow` method.

[nexus_multiple_args/handler/service_handler.py](https://github.com/temporalio/samples-python/blob/main/nexus_multiple_args/handler/service_handler.py)

```py
@nexusrpc.handler.service_handler(service=MyNexusService)
class MyNexusServiceHandler:
    """
    Service handler that demonstrates multiple argument handling in Nexus operations.
    """

    # This is a nexus operation that is backed by a Temporal workflow.
    # The key feature here is that it demonstrates how to map a single input object
    # (HelloInput) to a workflow that takes multiple individual arguments.
    @nexus.workflow_run_operation
    async def hello(
        self, ctx: nexus.WorkflowRunOperationContext, input: HelloInput
    ) -> nexus.WorkflowHandle[HelloOutput]:
        """
        Start a workflow with multiple arguments unpacked from the input object.
        """
        return await ctx.start_workflow(
            HelloHandlerWorkflow.run,
            args=[
                input.name,  # First argument: name
                input.language,  # Second argument: language
            ],
            id=str(uuid.uuid4()),
        )
```

### Register your Nexus Service handler in a Worker

After developing an asynchronous Nexus Operation handler to start a Workflow, the next step is to register your Nexus Service handler in a Worker. At this stage you can pass any arguments you need to your service handler's `__init__` method.
[hello_nexus/handler/worker.py](https://github.com/temporalio/samples-python/blob/main/hello_nexus/handler/worker.py)

```python
async def main():
    client = await Client.connect("localhost:7233", namespace=NAMESPACE)

    worker = Worker(
        client,
        task_queue=TASK_QUEUE,
        workflows=[WorkflowStartedByNexusOperation],
        nexus_service_handlers=[MyNexusServiceHandler()],
    )
    await worker.run()
```

## Develop a caller Workflow that uses the Nexus Service {#develop-caller-workflow-nexus-service}

To execute a Nexus Operation from the caller Workflow, import the necessary service definition and operation input/output types:

[hello_nexus/caller/workflows.py](https://github.com/temporalio/samples-python/blob/main/hello_nexus/caller/workflows.py)

```python
from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from hello_nexus.service import MyInput, MyNexusService, MyOutput


@workflow.defn
class CallerWorkflow:
    @workflow.run
    async def run(self, name: str) -> tuple[MyOutput, MyOutput]:
        nexus_client = workflow.create_nexus_client(
            service=MyNexusService,
            endpoint=NEXUS_ENDPOINT,
        )

        # Start the nexus operation and wait for the result in one go, using execute_operation.
        wf_result = await nexus_client.execute_operation(
            MyNexusService.my_workflow_run_operation,
            MyInput(name),
        )

        # Alternatively, you can use start_operation to obtain the operation handle and
        # then `await` the handle to obtain the result.
        sync_operation_handle = await nexus_client.start_operation(
            MyNexusService.my_sync_operation,
            MyInput(name),
        )
        sync_result = await sync_operation_handle

        return sync_result, wf_result
```

### Register the caller Workflow in a Worker and start the caller Workflow

After developing the caller Workflow, the next step is to register it with a Worker. Finally, the caller Workflow must be started using `client.start_workflow()` or `client.execute_workflow()`. These steps are the same as for any normal Workflow. The Python sample combines them in a single application. See [hello_nexus/caller/app.py](https://github.com/temporalio/samples-python/blob/main/hello_nexus/caller/app.py) for reference.

## Exceptions in Nexus operations {#exceptions-in-nexus-operations}

Temporal provides general guidance on [Errors in Nexus operations](https://docs.temporal.io/references/failures#errors-in-nexus-operations). In Python, there are three Nexus-specific exception classes:

- [`nexusrpc.OperationError`](https://nexus-rpc.github.io/sdk-python/nexusrpc.OperationError.html): the exception type you should raise in a Nexus operation to indicate that it has failed according to its own application logic and should not be retried.
- [`nexusrpc.HandlerError`](https://nexus-rpc.github.io/sdk-python/nexusrpc.HandlerError.html): you can raise this exception type in a Nexus operation with a specific [HandlerErrorType](https://nexus-rpc.github.io/sdk-python/nexusrpc.HandlerErrorType.html). The error will be marked retryable or non-retryable according to the type, following the [Nexus spec](https://github.com/nexus-rpc/api/blob/main/SPEC.md#predefined-handler-errors). The non-retryable handler error types are `BAD_REQUEST`, `UNAUTHENTICATED`, `UNAUTHORIZED`, `NOT_FOUND`, and `NOT_IMPLEMENTED`; the retryable types are `RESOURCE_EXHAUSTED`, `INTERNAL`, `UNAVAILABLE`, and `UPSTREAM_TIMEOUT`.
- [`temporalio.exceptions.NexusOperationError`](https://python.temporal.io/temporalio.exceptions.NexusOperationError.html): the error raised inside a Workflow when a Nexus operation fails for any reason.
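For illustration, here's a minimal hedged sketch of handling a failed Nexus Operation in the caller Workflow. It reuses `nexus_client`, `MyNexusService`, and `MyInput` from the caller Workflow above; the logging call is illustrative:

```python
from temporalio import workflow
from temporalio.exceptions import NexusOperationError

# Inside the caller Workflow's run method:
try:
    wf_result = await nexus_client.execute_operation(
        MyNexusService.my_workflow_run_operation,
        MyInput(name),
    )
except NexusOperationError as err:
    # The failure raised by the Operation handler is available on the cause chain.
    workflow.logger.warning(f"Nexus operation failed: {err.__cause__}")
    raise
```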
As shown in the sketch above, use the `__cause__` attribute on the exception to access the cause chain.

## Canceling a Nexus Operation {#canceling-a-nexus-operation}

To cancel a Nexus Operation from within a Workflow, call `handle.cancel()` on the operation handle. Only asynchronous operations can be canceled in Nexus, since cancellation is sent using an operation token.

The Workflow or other resources backing the operation may choose to ignore the cancellation request. If the request is ignored, the operation may still reach a terminal state. Once the caller Workflow completes, the caller's Nexus Machinery will not make any further attempts to cancel operations that are still running. It's okay to leave operations running in some use cases. To ensure cancellations are delivered, wait for all pending operations to finish before exiting the Workflow.

## Make Nexus calls across Namespaces in Temporal Cloud {#nexus-calls-across-namespaces-temporal-cloud}

This section assumes you are already familiar with how to connect a Worker to Temporal Cloud. The `tcld` CLI is used to create Namespaces and the Nexus Endpoint, and mTLS client certificates will be used to securely connect the caller and handler Workers to their respective Temporal Cloud Namespaces.

### Install the latest `tcld` CLI and generate certificates

To install the latest version of the `tcld` CLI, run the following command (on macOS):

```
brew install temporalio/brew/tcld
```

If you don't already have certificates, you can generate them for mTLS Worker authentication using the command below:

```
tcld gen ca --org $YOUR_ORG_NAME --validity-period 1y --ca-cert ca.pem --ca-key ca.key
```

These certificates will be valid for one year.

### Create caller and handler Namespaces

Before deploying to Temporal Cloud, ensure that the appropriate Namespaces are created for both the caller and handler. If you already have these Namespaces, you don't need to do this. (`<handler-namespace>` and `<caller-namespace>` are placeholders for your Namespace names.)

```
tcld login

tcld namespace create \
  --namespace <handler-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1

tcld namespace create \
  --namespace <caller-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1
```

Alternatively, you can create Namespaces through the UI: [https://cloud.temporal.io/Namespaces](https://cloud.temporal.io/Namespaces).

### Create a Nexus Endpoint to route requests from caller to handler

To create a Nexus Endpoint, you must have a Developer account role or higher, and have NamespaceAdmin permission on the `--target-namespace`.

```
tcld nexus endpoint create \
  --name <endpoint-name> \
  --target-task-queue my-handler-task-queue \
  --target-namespace <handler-namespace> \
  --allow-namespace <caller-namespace> \
  --description-file hello_nexus/endpoint_description.md
```

The `--allow-namespace` flag is used to build an Endpoint allowlist of caller Namespaces that can use the Nexus Endpoint, as described in Runtime Access Control. Alternatively, you can create a Nexus Endpoint through the UI: [https://cloud.temporal.io/nexus](https://cloud.temporal.io/nexus).
## Observability

### Web UI

A synchronous Nexus Operation surfaces in the caller Workflow with just `NexusOperationScheduled` and `NexusOperationCompleted` Events in the caller's Workflow history.

An asynchronous Nexus Operation surfaces in the caller Workflow with `NexusOperationScheduled`, `NexusOperationStarted`, and `NexusOperationCompleted` Events in the caller's Workflow history.

### Temporal CLI

Use the `workflow describe` command to show pending Nexus Operations in the caller Workflow and any attached callbacks on the handler Workflow:

```
temporal workflow describe -w <workflow-id>
```

Nexus events are included in the caller's Workflow history:

```
temporal workflow show -w <workflow-id>
```

For **asynchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationStarted`
- `NexusOperationCompleted`

For **synchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationCompleted`

:::note

`NexusOperationStarted` isn't reported in the caller's history for synchronous operations.

:::

## Learn more

- Read the high-level description of the [Temporal Nexus feature](/evaluate/nexus) and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- Learn how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4) and [Encyclopedia](/nexus).
- Deploy Nexus Endpoints in production with [Temporal Cloud](/cloud/nexus).

---

## Testing - Python SDK

The Testing section of the Temporal Application development guide describes the frameworks that facilitate Workflow and integration testing.

In the context of Temporal, you can create these types of automated tests:

- **End-to-end:** Running a Temporal Server and Worker with all its Workflows and Activities; starting and interacting with Workflows from a Client.
- **Integration:** Anything between end-to-end and unit testing.
  - Running Activities with mocked Context and other SDK imports (and usually network requests).
  - Running Workers with mock Activities, and using a Client to start Workflows.
  - Running Workflows with mocked SDK imports.
- **Unit:** Running a piece of Workflow or Activity code (a function or method) and mocking any code it calls.

We generally recommend writing the majority of your tests as integration tests. Because the test server supports skipping time, use the test server for both end-to-end and integration tests with Workers.

## Test frameworks {#test-frameworks}

Some SDKs have support or examples for popular test frameworks, runners, or libraries. A recommended framework for testing Temporal applications in Python is [pytest](https://docs.pytest.org/), which can help with fixtures to stand up and tear down test environments, provide useful test discovery, and make it easy to write parameterized tests.

## Testing Activities {#test-activities}

An Activity can be tested with a mock Activity environment, which provides a way to mock the Activity context, listen to Heartbeats, and cancel the Activity. This behavior allows you to test the Activity in isolation by calling it directly, without needing to create a Worker to run the Activity.

### Run an Activity {#run-an-activity}

If an Activity references its context, you need to mock that context when testing in isolation. To run an Activity in a test, use the [`ActivityEnvironment`](https://python.temporal.io/temporalio.testing.ActivityEnvironment.html) class.
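For example, here is a minimal sketch that runs an Activity directly with `ActivityEnvironment`; the `greet` Activity is a hypothetical stand-in:

```python
from temporalio import activity
from temporalio.testing import ActivityEnvironment


# A hypothetical Activity used only for this test.
@activity.defn
async def greet(name: str) -> str:
    return f"Hello, {name}!"


async def test_greet_activity():
    env = ActivityEnvironment()
    # Run the Activity directly, without a Worker.
    result = await env.run(greet, "World")
    assert result == "Hello, World!"
```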
`ActivityEnvironment` allows you to run any callable inside an Activity context. Use it to test the behavior of your code under various conditions.

### Listen to Heartbeats {#listen-to-heartbeats}

When an Activity sends a Heartbeat, be sure that you can see the Heartbeats in your test code so that you can verify them.

To test a Heartbeat in an Activity, use the [`on_heartbeat()`](https://python.temporal.io/temporalio.testing.ActivityEnvironment.html#on_heartbeat) property of the [`ActivityEnvironment`](https://python.temporal.io/temporalio.testing.ActivityEnvironment.html) class. This property sets a custom function that is called every time the `activity.heartbeat()` function is called within the Activity.

```python
from temporalio import activity
from temporalio.testing import ActivityEnvironment


@activity.defn
async def activity_with_heartbeats(param: str):
    activity.heartbeat(f"param: {param}")
    activity.heartbeat("second heartbeat")


async def test_activity_heartbeats():
    env = ActivityEnvironment()
    heartbeats = []
    # Set the `on_heartbeat` property to a callback function that will be called
    # for each Heartbeat sent by the Activity.
    env.on_heartbeat = lambda *args: heartbeats.append(args[0])
    # Use the run method to start the Activity, passing in the function that
    # contains the Heartbeats and any necessary parameters.
    await env.run(activity_with_heartbeats, "test")
    # Verify that the expected Heartbeats are received by the callback function.
    assert heartbeats == ["param: test", "second heartbeat"]
```

## Testing Workflows {#test-workflows}

### How to mock Activities {#mock-activities}

Mock the Activity invocation when unit testing your Workflows. When integration testing Workflows with a Worker, you can mock Activities by providing mock Activity implementations to the Worker.

```python
import uuid

from temporalio import activity
from temporalio.client import Client
from temporalio.worker import Worker

# Import your Activity Definition and real implementation
from hello.hello_activity import (
    ComposeGreetingInput,
    GreetingWorkflow,
    compose_greeting,
)


# Define your mocked Activity implementation
@activity.defn(name="compose_greeting")
async def compose_greeting_mocked(input: ComposeGreetingInput) -> str:
    return f"{input.greeting}, {input.name} from mocked activity!"


async def test_mock_activity(client: Client):
    task_queue_name = str(uuid.uuid4())
    # Provide the mocked Activity implementation to the Worker
    async with Worker(
        client,
        task_queue=task_queue_name,
        workflows=[GreetingWorkflow],
        activities=[compose_greeting_mocked],
    ):
        # Execute your Workflow as usual
        assert "Hello, World from mocked activity!" == await client.execute_workflow(
            GreetingWorkflow.run,
            "World",
            id=str(uuid.uuid4()),
            task_queue=task_queue_name,
        )
```

The mocked Activity implementation should have the same signature as the real implementation (including the input and output types) and the same name. When the Workflow invokes the Activity, it invokes the mocked implementation instead of the real one, allowing you to test your Workflow in isolation.

### How to skip time {#skip-time}

Some long-running Workflows can persist for months or even years. The test framework lets your Workflow code skip time, so your tests complete in seconds rather than the full durations your Workflows specify.

For example, if you have a Workflow sleep for a day, or have an Activity failure with a long retry interval, you don't need to wait the entire length of the sleep period to test whether the sleep function works. Instead, test the logic that happens after the sleep by skipping forward in time and complete your tests in a timely manner.
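As a quick sketch of what this looks like (details on automatic and manual skipping follow below), the following test reuses `YourWorkflow` from earlier; a day-long `asyncio.sleep` inside the Workflow completes in seconds:

```python
import uuid

from temporalio.testing import WorkflowEnvironment
from temporalio.worker import Worker


async def test_your_workflow_with_time_skipping():
    # Start the time-skipping test server; Timers are fast-forwarded.
    async with await WorkflowEnvironment.start_time_skipping() as env:
        task_queue = str(uuid.uuid4())
        async with Worker(
            env.client,
            task_queue=task_queue,
            workflows=[YourWorkflow],  # register any Activities it uses too
        ):
            result = await env.client.execute_workflow(
                YourWorkflow.run,
                "your name",
                id=str(uuid.uuid4()),
                task_queue=task_queue,
            )
            assert result is not None
```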
The test framework included in most SDKs is an in-memory implementation of Temporal Server that supports skipping time. Time is a global property of an instance of `WorkflowEnvironment`: skipping time (either automatically or manually) applies to all currently running tests. If you need different time behaviors for different tests, run your tests in series or with separate instances of the test server. For example, you could run all tests with automatic time skipping in parallel, then all tests with manual time skipping in series, and then all tests without time skipping in parallel.

#### Skip time automatically {#automatic-method}

You can skip time automatically in the SDK of your choice. Start a test server process that skips time as needed. For example, in the time-skipping mode, Timers, which include sleeps and conditional timeouts, are fast-forwarded except when Activities are running.

Use the [`start_time_skipping()`](https://python.temporal.io/temporalio.testing.WorkflowEnvironment.html#start_time_skipping) method to start a test server process and skip time automatically. Use the [`start_local()`](https://python.temporal.io/temporalio.testing.WorkflowEnvironment.html#start_local) method for a full local Temporal Server. Use the [`from_client()`](https://python.temporal.io/temporalio.testing.WorkflowEnvironment.html#from_client) method for an existing Temporal Server.

#### Skip time manually {#manual-method}

Skip time manually in the SDK of your choice. To implement time skipping, use the [`start_time_skipping()`](https://python.temporal.io/temporalio.testing.WorkflowEnvironment.html#start_time_skipping) static method.

```python
from temporalio.testing import WorkflowEnvironment


async def test_manual_time_skipping():
    async with await WorkflowEnvironment.start_time_skipping() as env:
        # Your code here
        # You can use the env.sleep(seconds) method to manually advance time
        await env.sleep(3)  # This will advance time by 3 seconds
        # Your code here
```

### Assert in Workflow {#assert-in-workflow}

The `assert` statement is a convenient way to insert debugging assertions into the Workflow context. The `assert` statement is available in Python and TypeScript.

For information about assert statements in Python, see [`assert`](https://docs.python.org/3/reference/simple_stmts.html#the-assert-statement) in the Python Language Reference.

## How to Replay a Workflow Execution {#replay}

Replay recreates the exact state of a Workflow Execution. You can replay a Workflow from the beginning of its Event History. Replay succeeds only if the [Workflow Definition](/workflow-definition) is compatible with the provided history from a deterministic point of view.

When you test changes to your Workflow Definitions, we recommend doing the following as part of your CI checks:

1. Determine which Workflow Types or Task Queues (or both) will be targeted by the Worker code under test.
2. Download the Event Histories of a representative set of recent open and closed Workflows from each Task Queue, either programmatically using the SDK client or via the Temporal CLI.
3. Run the Event Histories through replay.
4. Fail CI if any error is encountered during replay.
The following are examples of fetching and replaying Event Histories:

To replay Workflow Executions, use the [`replay_workflows`](https://python.temporal.io/temporalio.worker.Replayer.html#replay_workflows) or [`replay_workflow`](https://python.temporal.io/temporalio.worker.Replayer.html#replay_workflow) methods, passing one or more Event Histories as arguments.

In the following example (which, as of server v1.18, requires Advanced Visibility to be enabled), Event Histories are downloaded from the server and then replayed. If any replay fails, the code raises an exception.

```python
from temporalio.worker import Replayer

# `client` is an existing temporalio.client.Client
workflows = client.list_workflows(
    "TaskQueue=foo and StartTime > '2022-01-01T12:00:00'"
)
histories = workflows.map_histories()
replayer = Replayer(
    workflows=[MyWorkflowA, MyWorkflowB, MyWorkflowC],
)
await replayer.replay_workflows(histories)
```

In the next example, a single history is loaded from a JSON string:

```python
from temporalio.client import WorkflowHistory

replayer = Replayer(workflows=[YourWorkflow])
await replayer.replay_workflow(WorkflowHistory.from_json("your-workflow-id", history_json_str))
```

In both examples, if an Event History is non-deterministic, an error is raised. With `replay_workflows`, you can choose to wait until all histories have been replayed by setting the `raise_on_replay_failure` option to `False`.

:::note

If the Workflow History is exported by [Temporal Web UI](/web-ui) or through [Temporal CLI](/cli), you can pass the JSON file history object as a JSON string or as a Python dictionary through the `json.load()` function, which takes a file object and returns the JSON object.

:::

:::tip

When fetching event histories directly from the server or exporting them, be aware that the data can be protobuf-encoded (`bytes`), while the `Replayer` often works with decoded histories (like a `dict`). If you encounter `TypeError` exceptions related to `dict` vs. `bytes` mismatches during replay, ensure your event history is properly decoded before passing it to the Replayer.

:::

---

## Durable Timers - Python SDK

A Workflow can set a durable Timer for a fixed time period. In some SDKs, the function is called `sleep()`, and in others, it's called `timer()`.

A Workflow can sleep for months. Timers are persisted, so even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service are back up, the `sleep()` call will resolve and your code will continue executing.

Sleeping is a resource-light operation: it does not tie up the process, and you can run millions of Timers off a single Worker.

To set a Timer in Python, call the [`asyncio.sleep()`](https://docs.python.org/3/library/asyncio-task.html#sleeping) function and pass the duration in seconds you want to wait before continuing.

View the source code in the context of the rest of the application code.

```python
# ...
await asyncio.sleep(10)
```

---

## Versioning - Python SDK

Since Workflow Executions in Temporal can run for long periods, sometimes months or even years, it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress.

The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows.

With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.
There are two primary Versioning methods that you can use:

- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). The Worker Versioning feature allows you to tag your Workers and programmatically roll them out in versioned deployments, so that old Workers can run old code paths and new Workers can run new code paths.
- [Versioning with Patching](#patching). This method works by adding branches to your code tied to specific revisions. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions.

:::danger

Support for the pre-2025 experimental Worker Versioning APIs will be removed from Temporal Server in March 2026. Refer to the [latest Worker Versioning docs](/worker-versioning) for guidance. You can still refer to the [Worker Versioning Legacy](worker-versioning-legacy) docs if needed.

:::

## Worker Versioning

Temporal's [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) feature allows you to tag your Workers and programmatically roll them out in Deployment Versions, so that old Workers can run old code paths and new Workers can run new code paths. This way, you can pin your Workflows to specific revisions, avoiding the need for patching.

## Versioning with Patching {#patching}

### Adding a patch

A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow Executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning).

Suppose you have an initial Workflow version that runs an Activity called `pre_patch_activity`:

View the source code in the context of the rest of the application code.

```python
from datetime import timedelta

from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from activities import pre_patch_activity

# ...


@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> None:
        self._result = await workflow.execute_activity(
            pre_patch_activity,
            schedule_to_close_timeout=timedelta(minutes=5),
        )
```

Now, you want to update your code to run `post_patch_activity` instead. This represents your desired end state.

View the source code in the context of the rest of the application code.

```python
from datetime import timedelta

from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from activities import post_patch_activity

# ...


@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> None:
        self._result = await workflow.execute_activity(
            post_patch_activity,
            schedule_to_close_timeout=timedelta(minutes=5),
        )
```

The problem is that you cannot deploy `post_patch_activity` directly until you're certain there are no more running Workflows created using the `pre_patch_activity` code; otherwise you are likely to cause a nondeterminism error. Instead, you'll need to deploy `post_patch_activity` and use the [patched](https://python.temporal.io/temporalio.workflow.html#patched) function to determine which version of the code to execute.

Patching is a three-step process:

1. Patch in any new, updated code using the `patched()` function. Run the new patched code alongside old code.
2. Remove old code and use `deprecate_patch()` to mark a particular patch as deprecated.
3. Once there are no longer any open Workflow Executions of the previous version of the code, remove `deprecate_patch()`.
Let's walk through this process in sequence.

### Patching in new code {#using-patched-for-workflow-history-markers}

Using `patched` inserts a marker into the Workflow History. During Replay, if a Worker encounters a history with that marker, it fails the Workflow Task when the Workflow code doesn't produce the same patch marker (in this case, `my-patch`). This ensures you can safely deploy code from `post_patch_activity` as a "feature flag" alongside the original version (`pre_patch_activity`).

View the source code in the context of the rest of the application code.

```python
# ...
@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> None:
        if workflow.patched("my-patch"):
            self._result = await workflow.execute_activity(
                post_patch_activity,
                schedule_to_close_timeout=timedelta(minutes=5),
            )
        else:
            self._result = await workflow.execute_activity(
                pre_patch_activity,
                schedule_to_close_timeout=timedelta(minutes=5),
            )
```

### Deprecating patches {#deprecated-patches}

After ensuring that all Workflows started with `pre_patch_activity` code have left retention, you can [deprecate the patch](https://python.temporal.io/temporalio.workflow.html#deprecate_patch). Once your Workflows are no longer running the pre-patch code paths, you can deploy your code with `deprecate_patch()`. These Workers will be running the most up-to-date version of the Workflow code, which no longer requires the patch.

Deprecated patches serve as a bridge between the final stage of the patching process and the final state that no longer has patches. They function similarly to regular patches by adding a marker to the Workflow History. However, this marker won't cause a replay failure when the Workflow code doesn't produce it.

View the source code in the context of the rest of the application code.

```python
# ...
@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> None:
        workflow.deprecate_patch("my-patch")
        self._result = await workflow.execute_activity(
            post_patch_activity,
            schedule_to_close_timeout=timedelta(minutes=5),
        )
```

### Removing a patch {#deploy-new-code}

Once your pre-patch Workflows have left retention, you can then safely deploy Workers that no longer use either the `patched()` or `deprecate_patch()` calls:

View the source code in the context of the rest of the application code.

```python
# ...
@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> None:
        self._result = await workflow.execute_activity(
            post_patch_activity,
            schedule_to_close_timeout=timedelta(minutes=5),
        )
```

Patching allows you to make changes to currently running Workflows. It is a powerful method for introducing compatible changes without introducing non-determinism errors.

### Detailed Description of the Patched Function

For a video overview and a more in-depth explanation of how the `patched()` function works, refer to the [Patching](/patching) Encyclopedia entry.

### Workflow cutovers

To understand why Patching is useful, it's helpful to demonstrate cutting over an entire Workflow. Since incompatible changes only affect open Workflow Executions of the same type, you can avoid determinism errors by creating a whole new Workflow when making changes. To do this, you can copy the Workflow Definition function, giving it a different name, and register both names with your Workers.
For example, you would duplicate `PizzaWorkflow` as `PizzaWorkflowV2`:

```python
@workflow.defn(name="PizzaWorkflow")
class PizzaWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        ...  # this function contains the original code


@workflow.defn(name="PizzaWorkflowV2")
class PizzaWorkflowV2:
    @workflow.run
    async def run(self, name: str) -> str:
        ...  # this function contains the updated code
```

You would then need to update the Worker configuration, and any other identifier strings, to register both Workflow Types:

```python
worker = Worker(
    client,
    task_queue="your-task-queue",
    workflows=[PizzaWorkflow, PizzaWorkflowV2],
)
```

The downside of this method is that it requires you to duplicate code and to update any commands used to start the Workflow. This can become impractical over time. This method also does not provide a way to version any still-running Workflows; it is essentially just a cutover, unlike Patching.

### Testing a Workflow for replay safety

To determine whether your Workflow needs a patch, or to confirm that you've patched it successfully, you should incorporate [Replay Testing](/develop/python/testing-suite#replay).

---

## Worker Versioning (Legacy) - Python SDK

## (Deprecated) How to use Worker Versioning in Python {#worker-versioning}

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

See the [Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

:::

A Build ID corresponds to a deployment. If you don't already have one, we recommend a hash of the code, such as a Git SHA, combined with a human-readable timestamp. To use Worker Versioning, you need to pass a Build ID to your Python Worker and opt in to Worker Versioning.

### Assign a Build ID to your Worker and opt in to Worker Versioning

You should understand assignment rules before completing this step. See the [Worker Versioning Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

To enable Worker Versioning for your Worker, assign the Build ID, perhaps from an environment variable, and turn it on.

```python
# ...
worker = Worker(
    client,
    task_queue="your_task_queue_name",
    build_id=build_id,
    use_worker_versioning=True,
    # ... register workflows & activities, etc
)
# ...
```

:::warning

Importantly, when you start this Worker, it won't receive any tasks until you set up assignment rules.

:::

### Specify versions for Activities, Child Workflows, and Continue-as-New Workflows

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

By default, Activities, Child Workflows, and Continue-as-New Workflows are run on the build of the workflow that created them if they are also configured to run on the same Task Queue. When configured to run on a separate Task Queue, they will default to using the current assignment rules.

If you want to override this behavior, you can specify your intent via the `versioning_intent` argument available on the methods you use to invoke these commands.

For example, if you want an Activity to use the latest assignment rules rather than inheriting from its parent:

```python
# ...
await workflow.execute_activity(
    say_hello,
    "hi",
    versioning_intent=VersioningIntent.USE_ASSIGNMENT_RULES,
    start_to_close_timeout=timedelta(seconds=5),
)
# ...
```

### Tell the Task Queue about your Worker's Build ID (Deprecated)

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

Now you can use the SDK (or the Temporal CLI) to tell the Task Queue about your Worker's Build ID. You might want to do this as part of your CI deployment process.

```python
# ...
await client.update_worker_build_id_compatibility(
    "your_task_queue_name", BuildIdOpAddNewDefault("deadbeef")
)
```

This code adds the `deadbeef` Build ID to the Task Queue as the sole version in a new version set, which becomes the default for the queue. New Workflows execute on Workers with this Build ID, and existing ones will continue to be processed by appropriately compatible Workers.

If, instead, you want to add the Build ID to an existing compatible set, you can do this:

```python
# ...
await client.update_worker_build_id_compatibility(
    "your_task_queue_name", BuildIdOpAddNewCompatible("deadbeef", "some-existing-build-id")
)
```

This code adds `deadbeef` to the existing compatible set containing `some-existing-build-id` and marks it as the new default Build ID for that set.

You can also promote an existing Build ID in a set to be the default for that set:

```python
# ...
await client.update_worker_build_id_compatibility(
    "your_task_queue_name", BuildIdOpPromoteBuildIdWithinSet("deadbeef")
)
```

You can also promote an entire set to become the default set for the queue. New Workflows will start using that set's default build.

```python
# ...
await client.update_worker_build_id_compatibility(
    "your_task_queue_name", BuildIdOpPromoteSetByBuildId("deadbeef")
)
```

---

## Asynchronous Activity completion - Ruby SDK

## How to asynchronously complete an Activity {#asynchronous-activity-completion}

This page describes how to asynchronously complete an Activity.

[Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) enables the Activity Function to return without the Activity Execution completing.

There are three steps to follow:

1. The Activity provides the external system with identifying information needed to complete the Activity Execution. Identifying information can be a [Task Token](/activity-execution#task-token), or a combination of Namespace, Workflow Id, and Activity Id.
2. The Activity Function completes in a way that identifies it as waiting to be completed by an external system.
3. The Temporal Client is used to Heartbeat and complete the Activity.

To mark an Activity as completing asynchronously, do the following inside the Activity.

```ruby
# Capture token for later completion
captured_token = Temporalio::Activity::Context.current.info.task_token

# Raise a special exception that says an activity will be completed somewhere else
raise Temporalio::Activity::CompleteAsyncError
```

To update an Activity outside the Activity, use the [async_activity_handle](https://ruby.temporal.io/Temporalio/Client.html#async_activity_handle-instance_method) method on the client to get the handle of the Activity.

```ruby
handle = my_client.async_activity_handle(captured_token)
```

Then, on that handle, you can use the `heartbeat`, `complete`, `fail`, or `report_cancellation` methods to update the Activity.
```ruby
handle.complete('completion value')
```

---

## Benign exceptions - Ruby SDK

**How to mark an Activity error as benign using the Temporal Ruby SDK**

When Activities throw errors that are expected or not severe, they can create noise in your logs, metrics, and OpenTelemetry traces, making it harder to identify real issues. By marking these errors as benign, you can exclude them from your observability data while still handling them in your Workflow logic.

To mark an error as benign, set the `category` parameter to `Temporalio::Error::ApplicationError::Category::BENIGN` when raising an `ApplicationError`.

Benign errors:

- Have Activity failure logs downgraded to DEBUG level
- Do not emit Activity failure metrics
- Do not set the OpenTelemetry failure status to ERROR

```ruby
require 'temporalio/activity'

class MyActivity < Temporalio::Activity::Definition
  def execute
    begin
      call_external_service
    rescue StandardError => e
      # Mark this error as benign since it's expected
      raise Temporalio::Error::ApplicationError.new(
        e.message,
        category: Temporalio::Error::ApplicationError::Category::BENIGN
      )
    end
  end
end
```

Use benign exceptions for Activity errors that occur regularly as part of normal operations, such as polling an external service that isn't ready yet, or handling expected transient failures that will be retried.

---

## Child Workflows - Ruby SDK

This page shows how to do the following:

- [Start a Child Workflow Execution](#child-workflows) using the Ruby SDK
- [Set a Parent Close Policy](#parent-close-policy) using the Ruby SDK

## Start a Child Workflow Execution {#child-workflows}

A [Child Workflow Execution](/child-workflows) is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.

When using a Child Workflow API, Child Workflow related Events ([StartChildWorkflowExecutionInitiated](/references/events#startchildworkflowexecutioninitiated), [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted), [ChildWorkflowExecutionCompleted](/references/events#childworkflowexecutioncompleted), etc.) are logged in the Workflow Execution Event History.

Always block progress until the [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted) Event is logged to the Event History to ensure the Child Workflow Execution has started. After that, Child Workflow Executions may be abandoned using the _Abandon_ [Parent Close Policy](/parent-close-policy) set in the Child Workflow Options.

To spawn a Child Workflow Execution in Ruby, use the `execute_child_workflow` method, which starts the Child Workflow and waits for completion, or use the `start_child_workflow` method to start a Child Workflow and return its handle. The handle is useful if you want to do something after the Child Workflow has started, get its Workflow/Run Id, or Signal it while it's running.

:::note

`execute_child_workflow` is a helper method for `start_child_workflow(...).result`.

:::

```ruby
Temporalio::Workflow.execute_child_workflow(MyChildWorkflow, 'my-workflow-arg')
```

## Set a Parent Close Policy {#parent-close-policy}

A [Parent Close Policy](/parent-close-policy) determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out). The default Parent Close Policy option is set to terminate the Child Workflow Execution.
Set the `parent_close_policy` parameter for `execute_child_workflow` or `start_child_workflow` to specify the behavior of the Child Workflow when the Parent Workflow closes.

```ruby
Temporalio::Workflow.execute_child_workflow(
  MyChildWorkflow,
  'my-workflow-arg',
  parent_close_policy: Temporalio::Workflow::ParentClosePolicy::ABANDON
)
```

---

## Continue-As-New - Ruby SDK

This page describes how to Continue-As-New using the Temporal Ruby SDK.

[Continue-As-New](/workflow-execution/continue-as-new) enables a Workflow Execution to close successfully and create a new Workflow Execution in a single atomic operation if the number of Events in the Event History is becoming too large. The Workflow Execution spawned from the use of Continue-As-New has the same Workflow Id, a new Run Id, and a fresh Event History, and is passed all the appropriate parameters.

:::caution

As a precautionary measure, the Workflow Execution's Event History is limited to [51,200 Events](https://github.com/temporalio/temporal/blob/e3496b1c51bfaaae8142b78e4032cc791de8a76f/service/history/configs/config.go#L382) or [50 MB](https://github.com/temporalio/temporal/blob/e3496b1c51bfaaae8142b78e4032cc791de8a76f/service/history/configs/config.go#L380) and will warn you after 10,240 Events or 10 MB.

:::

To prevent a Workflow Execution Event History from exceeding this limit and failing, use Continue-As-New to start a new Workflow Execution with a fresh Event History.

A very large Event History can adversely affect the performance of a Workflow Execution. For example, in the case of a Workflow Worker failure, the full Event History must be pulled from the Temporal Service and given to another Worker via a Workflow Task. If the Event History is very large, it may take some time to load it.

The Continue-As-New feature enables developers to complete the current Workflow Execution and start a new one atomically. The new Workflow Execution has the same Workflow Id, but a different Run Id, and has its own Event History.

## Continue-As-New in Ruby {#continue-as-new}

To Continue-As-New in Ruby, raise a `Temporalio::Workflow::ContinueAsNewError` from inside your Workflow, which will stop the Workflow immediately and Continue-As-New.

```ruby
raise Temporalio::Workflow::ContinueAsNewError.new('my-new-arg')
```

:::warning Using Continue-as-New and Updates

- Temporal _does not_ support Continue-as-New functionality within Update handlers.
- Complete all handlers _before_ using Continue-as-New.
- Use Continue-as-New from your main Workflow Definition method, just as you would complete or fail a Workflow Execution.

:::

---

## Converters and encryption - Ruby SDK

Temporal's security model is designed around client-side encryption of Payloads. A client may encrypt Payloads before sending them to the server, and decrypt them after receiving them from the server. This provides a high degree of confidentiality because the Temporal Server itself has absolutely no knowledge of the actual data. It also gives implementers more power and more freedom regarding which client is able to read which data; they can control access with keys, algorithms, or other security measures.

A Temporal developer adds client-side encryption of Payloads by providing a Custom Payload Codec to its Client. Depending on business needs, a complete implementation of Payload Encryption may involve selecting appropriate encryption algorithms, managing encryption keys, restricting a subset of users from viewing payload output, or a combination of these.
The server itself never adds encryption over Payloads. Therefore, unless client-side encryption is implemented, Payload data will be persisted in non-encrypted form to the data store, and any Client that can make requests to a Temporal namespace (including the Temporal UI and CLI) will be able to read Payloads contained in Workflows. When working with sensitive data, you should always implement Payload encryption. ## Custom Payload Codec {#custom-payload-codec} Custom Data Converters can change the default Temporal Data Conversion behavior by adding hooks, sending payloads to external storage, or performing different encoding steps. If you only need to change the encoding performed on your payloads -- by adding compression or encryption -- you can override the default Data Converter to use a new `PayloadCodec`. The Payload Codec needs to extend `Temporalio::Converters::PayloadCodec` and implement `encode` and `decode` methods. These should convert the given payloads as needed into new payloads, using the `"encoding"` metadata field. Do not mutate the existing payloads. Here is an example of an encryption codec that just uses base64 in each direction: ```ruby class Base64Codec < Temporalio::Converters::PayloadCodec def encode(payloads) payloads.map do |p| Temporalio::Api::Common::V1::Payload.new( # Set our specific encoding. We may also want to add a key ID in here for use by # the decode side metadata: { 'encoding' => 'binary/my-payload-encoding' }, data: Base64.strict_encode64(p.to_proto) ) end end def decode(payloads) payloads.map do |p| # Ignore if it doesn't have our expected encoding next p unless p.metadata['encoding'] == 'binary/my-payload-encoding' Temporalio::Api::Common::V1::Payload.decode( Base64.strict_decode64(p.data) ) end end end ``` **Set Data Converter to use custom Payload Codec** When creating a client, the default `DataConverter` can be updated with the payload codec like so: ```ruby my_client = Temporalio::Client.connect( 'localhost:7233', 'my-namespace', data_converter: Temporalio::Converters::DataConverter.new(payload_codec: Base64Codec.new) ) ``` - Data **encoding** is performed by the client using the converters and codecs provided by Temporal or your custom implementation when passing input to the Temporal Cluster. For example, plain text input is usually serialized into a JSON object, and can then be compressed or encrypted. - Data **decoding** may be performed by your application logic during your Workflows or Activities as necessary, but decoded Workflow results are never persisted back to the Temporal Cluster. Instead, they are stored encoded on the Cluster, and you need to provide an additional parameter when using the [temporal workflow show](/cli/workflow#show) command or when browsing the Web UI to view output. ### Using a Codec Server A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. The Codec Server is independent of the Temporal Cluster and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment. The Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand. Refer to the [Codec Server](/production-deployment/data-encryption) documentation for information on how to design and deploy a Codec Server. 
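To make the codec pattern concrete with a second transformation, here is a minimal sketch of a compression codec built on Ruby's standard `zlib` library. It follows the same `PayloadCodec` shape as the base64 example above; the `binary/zlib-compressed` encoding name is an arbitrary choice for this sketch:

```ruby
require 'zlib'

class ZlibCodec < Temporalio::Converters::PayloadCodec
  def encode(payloads)
    payloads.map do |p|
      Temporalio::Api::Common::V1::Payload.new(
        # Arbitrary encoding name chosen for this sketch
        metadata: { 'encoding' => 'binary/zlib-compressed' },
        data: Zlib::Deflate.deflate(p.to_proto)
      )
    end
  end

  def decode(payloads)
    payloads.map do |p|
      # Pass through payloads this codec did not produce
      next p unless p.metadata['encoding'] == 'binary/zlib-compressed'

      Temporalio::Api::Common::V1::Payload.decode(
        Zlib::Inflate.inflate(p.data)
      )
    end
  end
end
```

It can be installed the same way as the base64 example, by passing `Temporalio::Converters::DataConverter.new(payload_codec: ZlibCodec.new)` when connecting the client.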
## Payload conversion {#custom-payload-converter}

Temporal SDKs provide a default [Payload Converter](/payload-converter) that can be customized to convert a custom data type to [Payload](/dataconversion#payload) and back.

### Conversion sequence {#conversion-sequence}

The order in which your encoding Payload Converters are applied depends on the order given to the Data Converter. You can set multiple encoding Payload Converters to run your conversions. When the Data Converter receives a value for conversion, it passes through each Payload Converter in sequence until the converter that handles the data type does the conversion. Payload Converters can be customized independently of a Payload Codec.

### Supported Data Types {#supported-data-types}

Data converters are used to convert raw Temporal payloads to/from actual Ruby types. A custom data converter can be set via the `data_converter` keyword argument when creating a client. Data converters are a combination of payload converters, payload codecs, and failure converters. Payload converters convert Ruby values to/from serialized bytes. Payload codecs convert bytes to bytes (e.g. for compression or encryption). Failure converters convert exceptions to/from serialized failures. Data converters are in the `Temporalio::Converters` module.

The default data converter uses a default payload converter, which supports the following types:

- `nil`
- "bytes" (i.e. `String` with `Encoding::ASCII_8BIT` encoding)
- `Google::Protobuf::MessageExts` instances
- [JSON module](https://docs.ruby-lang.org/en/master/JSON.html) for everything else

This means that normal Ruby objects will use `JSON.generate` when serializing and `JSON.parse` when deserializing (with `create_additions: true` set by default). So a Ruby object will often appear as a hash when deserialized. Also, hashes that are passed in with symbol keys end up with string keys when deserialized. While "JSON Additions" are supported, they are not compatible across SDK languages because they are a Ruby-specific construct.

The default payload converter is a collection of "encoding payload converters". On serialize, each encoding converter is tried in order until one accepts (the default set falls through to the JSON one). The encoding converter sets an `encoding` metadata value which is used to know which converter to use on deserialize. Custom encoding converters can be created, or the entire payload converter can be replaced with a different implementation.

**NOTE:** It is not recommended to reuse ActiveRecord models, or other general/ORM models that serve a different purpose, as Temporal models. Model purposes eventually diverge, and models for Temporal Workflows/Activities should be specific to their Temporal use for clarity and compatibility reasons. Many Ruby ORMs also defer work lazily and therefore have unclear serialization semantics. Instead, consider having models specific to Workflows/Activities and translate to/from your existing models as needed. See the next section on how to do this with ActiveModel objects.

#### ActiveModel {#active-model}

By default, ActiveModel objects do not natively support the `JSON` module.
A mixin can be created to add this support for ActiveModel, for example:

```ruby
module ActiveModelJSONSupport
  extend ActiveSupport::Concern
  include ActiveModel::Serializers::JSON

  included do
    def as_json(*)
      super.merge(::JSON.create_id => self.class.name)
    end

    def to_json(*args)
      as_json.to_json(*args)
    end

    def self.json_create(object)
      object = object.dup
      object.delete(::JSON.create_id)
      new(**object.symbolize_keys)
    end
  end
end
```

Now if `include ActiveModelJSONSupport` is present on any ActiveModel class, serialization will use `to_json`, which calls `as_json`; `as_json` invokes the superclass implementation and also includes the fully qualified class name under the JSON `create_id` key. On deserialization, Ruby JSON uses this key to know which class to call `json_create` on.

---

## Core application - Ruby SDK

This page shows how to do the following:

- [Develop a basic Workflow Definition](#develop-workflow)
- [Develop a basic Activity Definition](#develop-activity)
- [Start an Activity from a Workflow](#activity-execution)
- [Run a Worker Process](#run-worker-process)
- [Set a Dynamic Workflow](#set-a-dynamic-workflow)
- [Set a Dynamic Activity](#set-a-dynamic-activity)

## Develop a Workflow {#develop-workflow}

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a [Workflow Definition](/workflow-definition).

In the Temporal Ruby SDK programming model, Workflows are defined as classes. Have the Workflow class extend `Temporalio::Workflow::Definition` to define a Workflow. The entrypoint is the `execute` method.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute(name)
    Temporalio::Workflow.execute_activity(
      MyActivity,
      { greeting: 'Hello', name: },
      start_to_close_timeout: 100
    )
  end
end
```

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that hashes or objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow.

### Workflow Logic Requirements {#workflow-logic-requirements}

Temporal Workflows [must be deterministic](https://docs.temporal.io/workflows#deterministic-constraints), which includes Ruby Workflows. This means there are several things Workflows cannot do, such as:

- Perform IO (network, disk, stdio, etc.)
- Access/alter external mutable state
- Do any threading
- Do anything using the system clock (e.g. `Time.now`)
- Make any random calls
- Make any not-guaranteed-deterministic calls

To prevent illegal workflow calls, a call tracer is put on the workflow thread that raises an exception if any illegal calls are made. Which calls are illegal is configurable in the worker options.

### Customize Workflow Type {#workflow-type}

Workflows have a Type, which is referred to as the Workflow name. The following example demonstrates how to set a custom name for your Workflow Type. You can customize the Workflow name by calling the `workflow_name` class method on the class. The Workflow name defaults to the unqualified class name.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  # Customize the name
  workflow_name :MyDifferentWorkflowName

  def execute(name)
    Temporalio::Workflow.execute_activity(
      MyActivity,
      { greeting: 'Hello', name: },
      start_to_close_timeout: 100
    )
  end
end
```

## Develop an Activity {#develop-activity}

One of the primary things that Workflows do is orchestrate the execution of Activities.
An Activity is a normal method execution that's intended to execute a single, well-defined action (either short or long-running), such as querying a database, calling a third-party API, or transcoding a media file. An Activity can interact with the world outside the Temporal Platform or use a Temporal Client to interact with a Temporal Service. For the Workflow to be able to execute the Activity, we must define the [Activity Definition](/activity-definition).

You can develop an Activity Definition by creating a class that extends `Temporalio::Activity::Definition`. To register a class as an Activity with a custom name, use the `activity_name` class method in the class definition. Otherwise, the Activity name is the unqualified class name.

```ruby
class MyActivity < Temporalio::Activity::Definition
  def execute(input)
    "#{input['greeting']}, #{input['name']}!"
  end
end
```

Activity implementation code should be _idempotent_. Learn more about [idempotency](/activity-definition#idempotency).

There is no explicit limit to the total number of parameters that an [Activity Definition](/activity-definition) may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload. A single argument is limited to a maximum size of 2 MB. And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Some SDKs require that you pass context objects; others do not. When it comes to your application data -- that is, data that is serialized and encoded into a Payload -- we recommend that you use a single hash or object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a method signature. The `execute` method in your Activity can technically accept multiple parameters of any data type that Temporal can convert. However, Temporal strongly encourages using a single parameter object to simplify versioning and maintainability.

### Activity Concurrency and Executors {#activity-concurrency-and-executors}

:::note

This section covers advanced concurrency and execution options that most users will not need when getting started.

:::

By default, Activities run in the "thread pool executor" (i.e. `Temporalio::Worker::ActivityExecutor::ThreadPool`). This default is shared across all workers and is a naive thread pool that continually makes threads as needed when none are idle/available to handle incoming work. If a thread sits idle long enough, it will be killed.

The maximum number of concurrent Activities a worker will run at a time is configured via its `tuner` option. The default is `Temporalio::Worker::Tuner.create_fixed`, which defaults to 100 Activities at a time for that worker. When this value is reached, the worker will stop asking for work from the server until there are slots available again.

In addition to the thread pool executor, there is also a fiber executor in the default executor set. To use fibers, call the `activity_executor :fiber` class method at the top of the activity class (the default of this value is `:default`, which is the thread pool executor). Activities can only choose the fiber executor if the worker has been created and run in a fiber, but the thread pool executor is always available. Currently, due to [an issue](https://github.com/temporalio/sdk-ruby/issues/162), workers can only run in a fiber on Ruby versions 3.3 and newer.
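Since these options are easy to misread in prose, here is a minimal sketch of how they fit together. The `activity_executor :fiber` call and the `tuner` option come straight from the description above, but the `activity_slots:` keyword passed to `Tuner.create_fixed` is an assumption made for illustration; consult the [Worker API documentation](https://ruby.temporal.io/Temporalio/Worker.html) for the exact signature:

```ruby
require 'temporalio/client'
require 'temporalio/worker'

# An activity that opts into the fiber executor. Note the worker itself must be
# created and run inside a fiber (Ruby 3.3+) for this executor to be usable;
# the thread pool executor is always available.
class MyFiberActivity < Temporalio::Activity::Definition
  activity_executor :fiber

  def execute(input)
    "processed #{input}"
  end
end

client = Temporalio::Client.connect('localhost:7233', 'default')

worker = Temporalio::Worker.new(
  client:,
  task_queue: 'my-task-queue',
  activities: [MyFiberActivity],
  # Assumed keyword for this sketch: lower the fixed slot count from the default
  # of 100 so this worker runs at most 25 activities concurrently
  tuner: Temporalio::Worker::Tuner.create_fixed(activity_slots: 25)
)
```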
## Start Activity Execution {#activity-execution}

Calls to spawn [Activity Executions](/activity-execution) are written within a [Workflow Definition](/workflow-definition). The call to spawn an Activity Execution generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command. This results in the set of three [Activity Task](/tasks#activity-task) related Events ([ActivityTaskScheduled](/references/events#activitytaskscheduled), [ActivityTaskStarted](/references/events#activitytaskstarted), and `ActivityTask[Closed]`) in your Workflow Execution Event History.

The values passed to Activities through invocation parameters or returned through a result value are recorded in the Execution history. The entire Execution history is transferred from the Temporal Service to Workflow Workers when Workflow state needs to be recovered. A large Execution history can thus adversely impact the performance of your Workflow. Therefore, be mindful of the amount of data you transfer through Activity invocation parameters or return values. Otherwise, no additional limitations exist on Activity implementations.

To spawn an Activity Execution, use the `execute_activity` operation from within your Workflow Definition.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute(name)
    Temporalio::Workflow.execute_activity(
      MyActivity,
      { greeting: 'Hello', name: },
      start_to_close_timeout: 100
    )
  end
end
```

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set as keyword parameters. The Activity result is returned from the `execute_activity` call.

## Workflow Futures {#workflow-futures}

`Temporalio::Workflow::Future` can be used for running things in the background or concurrently. Temporal provides Workflow-safe wrappers around some core language features in cases like these. `Temporalio::Workflow::Future` is a safe wrapper around `Fiber.schedule` for running multiple Activities at once. The Ruby SDK also provides `Workflow.wait_condition` for awaiting a result. Futures are never used implicitly, but they work with all Workflow code and constructs.
For instance, to run 3 activities and wait for them all to complete, something like this can be written:

```ruby
# Start 3 activities in background
fut1 = Temporalio::Workflow::Future.new do
  Temporalio::Workflow.execute_activity(MyActivity1, schedule_to_close_timeout: 300)
end
fut2 = Temporalio::Workflow::Future.new do
  Temporalio::Workflow.execute_activity(MyActivity2, schedule_to_close_timeout: 300)
end
fut3 = Temporalio::Workflow::Future.new do
  Temporalio::Workflow.execute_activity(MyActivity3, schedule_to_close_timeout: 300)
end

# Wait for them all to complete
Temporalio::Workflow::Future.all_of(fut1, fut2, fut3).wait
Temporalio::Workflow.logger.info("Got: #{fut1.result}, #{fut2.result}, #{fut3.result}")
```

Or, say, to wait on the first of 5 activities or a timeout to complete:

```ruby
# Start 5 activities
act_futs = 5.times.map do |i|
  Temporalio::Workflow::Future.new do
    Temporalio::Workflow.execute_activity(MyActivity, "my-arg-#{i}", schedule_to_close_timeout: 300)
  end
end

# Start a timer
sleep_fut = Temporalio::Workflow::Future.new { Temporalio::Workflow.sleep(30) }

# Wait for first act result or sleep fut
act_result = Temporalio::Workflow::Future.any_of(sleep_fut, *act_futs).wait

# Fail if timer done first
raise Temporalio::Error::ApplicationError, 'Timer expired' if sleep_fut.done?

# Print act result otherwise
Temporalio::Workflow.logger.info("Act result: #{act_result}")
```

There are several other details not covered here about futures, such as how exceptions are handled, how to use a setter proc instead of a block, etc. See the [API documentation](https://ruby.temporal.io/Temporalio/Workflow/Future.html) for details.

## Run Worker Process {#run-worker-process}

The [Worker Process](/workers#worker-process) is where Workflow Functions and Activity Functions are actually executed. In a Temporal application deployment, you ship and scale as many Workers as you need to handle the load of your Workflows and Activities.

- Each [Worker Entity](/workers#worker-entity) in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
- Each Worker Entity must also associate itself with exactly one [Task Queue](/task-queue).
- Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A [Worker Entity](/workers#worker-entity) is the component within a Worker Process that listens to a specific Task Queue. A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively. Workers are implemented in each Temporal SDK, and can be deployed with just a bit of boilerplate.

To create a Worker, use `Temporalio::Worker.new()`, providing the Worker options, which include the Task Queue, Workflows, Activities, and more. The following code example creates a Worker that polls for tasks from the Task Queue and executes the Workflow. When a Worker is created, it accepts a list of Workflows, a list of Activities, or both.
```ruby
# Create a client to localhost on default namespace
client = Temporalio::Client.connect('localhost:7233', 'default')

# Create a worker with the client, activities, and workflows
worker = Temporalio::Worker.new(
  client:,
  task_queue: 'my-task-queue',
  workflows: [MyWorkflow],
  # This provides the activity instance which means it is reused for each attempt, but
  # just the class can be provided to instantiate for each attempt
  activities: [MyActivity.new]
)

# Run the worker until SIGINT. There are other ways to wait for shutdown, or a block can
# be provided that will shutdown when the block completes
worker.run(shutdown_signals: ['SIGINT'])
```

To run multiple workers, `Temporalio::Worker.run_all` may be used instead.

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types. If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

## Set a Dynamic Workflow {#set-a-dynamic-workflow}

A Dynamic Workflow in Temporal is a Workflow that is invoked dynamically at runtime if no other Workflow with the same name is registered. A Workflow can be made dynamic by invoking the `workflow_dynamic` class method at the top of the definition. You must register the Workflow with the Worker before it can be invoked. Only one Dynamic Workflow can be present on a Worker.

Often, dynamic Workflows are used in conjunction with `workflow_raw_args`, which does not convert arguments but instead passes them through as a splatted array of `Temporalio::Converters::RawValue` instances.

```ruby
class MyDynamicWorkflow < Temporalio::Workflow::Definition
  # Make this the dynamic workflow and accept raw args
  workflow_dynamic
  workflow_raw_args

  def execute(*raw_args)
    # Require a single arg for our workflow
    raise Temporalio::Error::ApplicationError, 'One arg expected' unless raw_args.size == 1

    # Use payload converter to convert it
    name = Temporalio::Workflow.payload_converter.from_payload(raw_args.first.payload)
    Temporalio::Workflow.execute_activity(
      MyActivity,
      { greeting: 'Hello', name: },
      start_to_close_timeout: 100
    )
  end
end
```

## Set a Dynamic Activity {#set-a-dynamic-activity}

A Dynamic Activity in Temporal is an Activity that is invoked dynamically at runtime if no other Activity with the same name is registered. An Activity can be made dynamic by invoking the `activity_dynamic` class method at the top of the definition. You must register the Activity with the Worker before it can be invoked. Only one Dynamic Activity can be present on a Worker.

Often, dynamic Activities are used in conjunction with `activity_raw_args`, which does not convert arguments but instead passes them through as a splatted array of `Temporalio::Converters::RawValue` instances.

```ruby
class MyDynamicActivity < Temporalio::Activity::Definition
  # Make this the dynamic activity and accept raw args
  activity_dynamic
  activity_raw_args

  def execute(*raw_args)
    raise Temporalio::Error::ApplicationError, 'One arg expected' unless raw_args.size == 1

    # Use payload converter to convert it
    input = Temporalio::Activity::Context.current.payload_converter.from_payload(raw_args.first.payload)
    "#{input['greeting']}, #{input['name']}!"
  end
end
```

---

## Debugging - Ruby SDK

## Debugging {#debug}

This page shows how to do the following:

- [Debug in a development environment](#debug-in-a-development-environment)
- [Debug in a production environment](#debug-in-a-production-environment)

## Debug in a development environment {#debug-in-a-development-environment}

When developing Workflows, you can use the normal development tools of logging and a debugger to see what's happening in your Workflow. In addition, you can inspect your Workflow using the [Web UI](/web-ui) or [Temporal CLI](/cli). The Web UI provides insight into your Workflows, making it easier to identify issues and monitor the state of your Workflows in real time.

## Debug in a production environment {#debug-in-a-production-environment}

For production Workflows, debugging options include:

- [Web UI](/web-ui)
- [Temporal CLI](/cli)
- [Replay](/develop/ruby/testing-suite#replay-test)
- [Tracing](/develop/ruby/observability#tracing)
- [Logging](/develop/ruby/observability#logging)

You can analyze Worker performance using:

- [Metrics](/develop/ruby/observability#metrics)
- [Worker performance guide](/develop/worker-performance)

To monitor Server performance:

- Use [Cloud metrics](/cloud/metrics/) if you're on Temporal Cloud
- Or [self-hosted Server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics) if running your own deployment

---

## Durable Timers - Ruby SDK

This page describes how to set a Durable Timer using the Temporal Ruby SDK.

A [Durable Timer](/workflow-execution/timers-delays) is used to pause the execution of a Workflow for a specified duration. A Workflow can sleep for days or even months. Timers are persisted, so even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service are back up, the Durable Timer call will resolve and your code will continue executing. Sleeping is a resource-light operation: it does not tie up the process, and you can run millions of Timers off a single Worker.

To add a Timer in a Workflow, use `Temporalio::Workflow.sleep`. _Technically_ `Kernel#sleep` works, but the Workflow form allows you to set a summary to view in the UI.

```ruby
# Sleep for 72 hours
Temporalio::Workflow.sleep(72 * 60 * 60, summary: 'my timer')
```

There is also a `Temporalio::Workflow.timeout` method that accepts a block and works like standard Ruby `Timeout.timeout` if you need the ability to time out a block of code.

---

## Enriching the User Interface - Ruby SDK

Temporal supports adding context to Workflows and Events with metadata. This helps users identify and understand Workflows and their operations.

## Adding Summary and Details to Workflows

### Starting a Workflow

When starting a Workflow, you can provide a static summary and details to help identify the Workflow in the UI:

```ruby
require 'temporalio/client'

# Create client
client = Temporalio::Client.connect('localhost:7233', 'default')

# Start a workflow with static summary and details
handle = client.start_workflow(
  'YourWorkflow',
  'workflow input',
  id: 'your-workflow-id',
  task_queue: 'your-task-queue',
  static_summary: 'Order processing for customer #12345',
  static_details: 'Processing premium order with expedited shipping'
)
```

`static_summary:` is a single-line description that appears in the Workflow list view, limited to 200 bytes.
`static_details:` can be multi-line and provides more comprehensive information that appears in the Workflow details view, with a larger limit of 20K bytes. The input format is standard Markdown, excluding images, HTML, and scripts.

You can also use `execute_workflow` for synchronous execution:

```ruby
# Execute workflow synchronously
result = client.execute_workflow(
  'YourWorkflow',
  'workflow input',
  id: 'your-workflow-id',
  task_queue: 'your-task-queue',
  static_summary: 'Order processing for customer #12345',
  static_details: 'Processing premium order with expedited shipping'
)
```

#### Inside the Workflow

Within a Workflow, you can get and set the _current workflow details_. Unlike the static summary/details set at Workflow start, this value can be updated throughout the life of the Workflow. Current Workflow details also take Markdown format (excluding images, HTML, and scripts) and can span multiple lines.

```ruby
require 'temporalio'

class YourWorkflow < Temporalio::Workflow::Definition
  def execute(input)
    # Get the current details
    current_details = Temporalio::Workflow.current_details
    Temporalio::Workflow.logger.info("Current details: #{current_details}")

    # Set/update the current details
    Temporalio::Workflow.current_details = 'Updated workflow details with new status'

    'Workflow completed'
  end
end
```

#### Adding Summary to Activities and Timers

You can attach a `summary:` to Activities when starting them from within a Workflow:

```ruby
require 'temporalio'

class YourWorkflow < Temporalio::Workflow::Definition
  def execute(input)
    # Execute an activity with a summary
    result = Temporalio::Workflow.execute_activity(
      'YourActivity',
      input,
      start_to_close_timeout: 10,
      summary: 'Processing user data'
    )
    result
  end
end
```

Similarly, you can attach a `summary:` to Timers within a Workflow:

```ruby
require 'temporalio'

class YourWorkflow < Temporalio::Workflow::Definition
  def execute(input)
    # Create a timer with a summary
    Temporalio::Workflow.sleep(300, summary: 'Waiting for payment confirmation')
    'Timer completed'
  end
end
```

The input format for `summary:` is a string, limited to 200 bytes.

## Viewing Summary and Details in the UI

Once you've added summaries and details to your Workflows, Activities, and Timers, you can view this enriched information in the Temporal Web UI. Navigate to your Workflow's details page to see the metadata displayed in two key locations:

### Workflow Overview Section

At the top of the workflow details page, you'll find the workflow-level metadata:

- **Summary & Details** - Displays the static summary and static details set when starting the workflow
- **Current Details** - Displays the dynamic details that can be updated during workflow execution

All Workflow details support standard Markdown formatting (excluding images, HTML, and scripts), allowing you to create rich, structured information displays.

### Event History

Individual events in the Workflow's Event History display their associated summaries when available: Workflow, Activity, and Timer summaries appear in purple text next to their corresponding Events, providing immediate context without requiring you to expand the event details. When you do expand an event, the summary is also prominently displayed in the detailed view.
---

## Failure detection - Ruby SDK

This page shows how to do the following:

- [Raise and Handle Exceptions](#exception-handling)
- [Deliberately Fail Workflows](#workflow-failure)
- [Set Workflow timeouts](#workflow-timeouts)
- [Set Workflow retries](#workflow-retries)
- [Set Activity timeouts](#activity-timeouts)
- [Set Activity Retry Policy](#activity-retries)
- [Heartbeat an Activity](#activity-heartbeats)
- [Set Heartbeat timeouts](#heartbeat-timeout)

## Raise and Handle Exceptions {#exception-handling}

In each Temporal SDK, error handling is implemented idiomatically, following the conventions of the language. Temporal uses several different error classes internally -- for example, [`CanceledError`](https://ruby.temporal.io/Temporalio/Error/CanceledError.html) in the Ruby SDK, which handles Workflow cancellation. You should not raise or otherwise implement these manually, as they are tied to Temporal platform logic.

The one Temporal error class that you will typically raise deliberately is [`ApplicationError`](https://ruby.temporal.io/Temporalio/Error/ApplicationError.html). In fact, *any* other exceptions that are raised from your Ruby code in a Temporal Activity will be converted to an `ApplicationError` internally. This way, an error's type, severity, and any additional details can be sent to the Temporal Service, indexed by the Web UI, and even serialized across language boundaries. In other words, these two code samples do the same thing:

```ruby
class MyError < StandardError
end

class SomethingThatFails < Temporalio::Activity::Definition
  def execute(details)
    Temporalio::Activity::Context.current.logger.info(
      "We have a problem."
    )
    raise MyError.new('Simulated failure')
  end
end
```

```ruby
class SomethingThatFails < Temporalio::Activity::Definition
  def execute(details)
    Temporalio::Activity::Context.current.logger.info(
      "We have a problem."
    )
    raise Temporalio::Error::ApplicationError.new('Simulated failure', type: 'MyError')
  end
end
```

Depending on your implementation, you may decide to use either method. One reason to use the Temporal `ApplicationError` class is that it allows you to set an additional `non_retryable` parameter. This way, you can decide whether an error should not be retried automatically by Temporal. This can be useful for deliberately failing a Workflow due to bad input data, rather than waiting for a timeout to elapse:

```ruby
class SomethingThatFails < Temporalio::Activity::Definition
  def execute(details)
    Temporalio::Activity::Context.current.logger.info(
      "We have a problem."
    )
    raise Temporalio::Error::ApplicationError.new('Simulated failure', non_retryable: true)
  end
end
```

You can alternately specify a list of errors that are non-retryable in your Activity [Retry Policy](#activity-retries).

## Failing Workflows {#workflow-failure}

One of the core design principles of Temporal is that an Activity Failure will never directly cause a Workflow Failure -- a Workflow should never return as Failed unless you deliberately fail it. The default retry policy associated with Temporal Activities is to retry them until reaching a certain timeout threshold. Activities will not actually *return* a failure to your Workflow until this condition or another non-retryable condition is met. At this point, you can decide how to handle an error returned by your Activity the way you would in any other program.
For example, you could implement a [Saga Pattern](https://github.com/temporalio/samples-ruby/tree/main/saga) that uses `rescue` blocks to "unwind" some of the steps your Workflow has performed up to the point of Activity Failure.

**You will only fail a Workflow by manually raising an `ApplicationError` from the Workflow code.** You could do this in response to an Activity Failure, if the failure of that Activity means that your Workflow should not continue:

```ruby
class SagaWorkflow < Temporalio::Workflow::Definition
  def execute(details)
    Temporalio::Workflow.execute_activity(Activities::SomethingThatFails, details, start_to_close_timeout: 30)
  rescue StandardError
    raise Temporalio::Error::ApplicationError.new('Fail the Workflow')
  end
end
```

This works differently in a Workflow than raising exceptions from Activities. In an Activity, any Ruby exceptions or custom exceptions are converted to a Temporal `ApplicationError`. In a Workflow, any exceptions that are raised other than an explicit Temporal `ApplicationError` will only fail that particular [Workflow Task](https://docs.temporal.io/tasks#workflow-task-execution) and be retried. This includes any typical Ruby `RuntimeError`s that are raised automatically. These errors are treated as bugs that can be corrected with a fixed deployment, rather than a reason for a Temporal Workflow Execution to return unexpectedly.

## Workflow timeouts {#workflow-timeouts}

Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution.

- **[Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout)**: Limits how long the full Workflow Execution can run.
- **[Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout)**: Limits the duration of an individual run of a Workflow Execution.
- **[Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout)**: Limits the time allowed for a Worker to process a Workflow Task.

Set these values as keyword parameter options when starting a Workflow.

```ruby
result = my_client.execute_workflow(
  MyWorkflow,
  'some-input',
  id: 'my-workflow-id',
  task_queue: 'my-task-queue',
  execution_timeout: 5 * 60
)
```

### Workflow retries {#workflow-retries}

Use a [Retry Policy](/encyclopedia/retry-policies) to automatically retry Workflow Executions on failure. A Retry Policy works in cooperation with the timeouts above to provide fine-grained control over execution behavior. Workflow Executions do not retry by default, and Retry Policies should be used with Workflow Executions only in certain situations.

The `retry_policy` can be set when calling `start_workflow` or `execute_workflow`.

```ruby
result = my_client.execute_workflow(
  MyWorkflow,
  'some-input',
  id: 'my-workflow-id',
  task_queue: 'my-task-queue',
  retry_policy: Temporalio::RetryPolicy.new(max_interval: 10)
)
```

## Activity timeouts {#activity-timeouts}

Each Activity Timeout controls a different aspect of how long an Activity Execution can take:

- **[Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout)**
- **[Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout)**
- **[Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout)**

At least one of `start_to_close_timeout` or `schedule_to_close_timeout` is required.
```ruby Temporalio::Workflow.execute_activity( MyActivity, { greeting: 'Hello', name: }, start_to_close_timeout: 5 * 60 ) ``` ### Activity Retry Policy {#activity-retries} By default, Activities use a system Retry Policy. You can override it by specifying a custom Retry Policy. To create an Activity Retry Policy in Ruby, set the `retry_policy` parameter when executing an activity. ```ruby Temporalio::Workflow.execute_activity( MyActivity, { greeting: 'Hello', name: }, start_to_close_timeout: 5 * 60, retry_policy: Temporalio::RetryPolicy.new(max_interval: 10) ) ``` ### Override the retry interval with `next_retry_delay` {#next-retry-delay} If you raise an application-level error, you can override the Retry Policy's delay by specifying a new delay. ```ruby raise Temporalio::Error::ApplicationError.new( 'Some error', type: 'SomeErrorType', next_retry_delay: 3 * Temporalio::Activity::Context.current.info.attempt ) ``` ## Heartbeat an Activity {#activity-heartbeats} A Heartbeat is a periodic signal from the Worker to the Temporal Service indicating the Activity is still alive and making progress. - Heartbeats are used to detect Worker failure. - Cancellations are delivered via Heartbeats. - Heartbeats may contain custom progress details. ```ruby class MyActivity < Temporalio::Activity::Definition def execute # This is a naive loop simulating work, but similar heartbeat logic # applies to other scenarios as well loop do # Send heartbeat Temporalio::Activity::Context.current.heartbeat # Sleep before heartbeating again sleep(3) end end end ``` ### Heartbeat Timeout {#heartbeat-timeout} The Heartbeat Timeout sets the maximum duration between Heartbeats before the Temporal Service considers the Activity failed. ```ruby Temporalio::Workflow.execute_activity( MyActivity, { greeting: 'Hello', name: }, start_to_close_timeout: 5 * 60, heartbeat_timeout: 5 ) ``` --- ## Ruby SDK developer guide ![Ruby SDK Banner](/img/assets/banner-ruby-temporal.png) :::info Ruby SDK RESOURCES Build Temporal Applications with the Ruby SDK. **Temporal Ruby Technical Resources:** - [Ruby SDK Quickstart - Setup Guide](https://docs.temporal.io/develop/ruby/set-up-local-ruby) - [Ruby SDK Code Samples](https://github.com/temporalio/samples-ruby) - [Ruby API Documentation](https://ruby.temporal.io/) - [Ruby SDK GitHub](https://github.com/temporalio/sdk-ruby) - [Temporal 101 in Ruby Free Course](https://learn.temporal.io/courses/temporal_101/ruby/) **Get Connected with the Temporal Ruby Community:** - [Temporal Ruby Community Slack](https://temporalio.slack.com/archives/C052K5QFBNW) - [Ruby SDK Forum](https://community.temporal.io/tag/ruby-sdk) ::: ## [Core Application](/develop/ruby/core-application) Use the essential components of a Temporal Application (Workflows, Activities, and Workers) to build and run a Temporal application. - [Develop a basic Workflow Definition](/develop/ruby/core-application#develop-workflow): Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a Workflow Definition. - [Develop a basic Activity Definition](/develop/ruby/core-application#develop-activity): One of the primary things that Workflows do is orchestrate the execution of Activities. - [Start an Activity from a Workflow](/develop/ruby/core-application#activity-execution): Calls to spawn Activity Executions are written within a Workflow Definition. 
- [Run a Worker Process](/develop/ruby/core-application#run-worker-process): The Worker Process is where Workflow Functions and Activity Functions are executed.
- [Set a Dynamic Workflow](/develop/ruby/core-application#set-a-dynamic-workflow): Set a Workflow that can be invoked dynamically at runtime.
- [Set a Dynamic Activity](/develop/ruby/core-application#set-a-dynamic-activity): Set an Activity that can be invoked dynamically at runtime.

## [Temporal Client](/develop/ruby/temporal-client)

Connect to a Temporal Service and start a Workflow Execution.

- [Create a Temporal Client](/develop/ruby/temporal-client#connect-to-development-service): Instantiate and configure a client to interact with the Temporal Service.
- [Connect to Temporal Cloud](/develop/ruby/temporal-client#connect-to-temporal-cloud): Securely connect to Temporal Cloud for a fully managed service.
- [Start a Workflow](/develop/ruby/temporal-client#start-workflow): Initiate Workflows seamlessly via the Ruby SDK.
- [Get Workflow results](/develop/ruby/temporal-client#get-workflow-results): Retrieve and process the results of your Workflows efficiently.

## [Testing](/develop/ruby/testing-suite)

Set up the testing suite and test Workflows and Activities.

- [Test frameworks](/develop/ruby/testing-suite#test-frameworks): The testing suite provides a framework to facilitate Workflow and integration testing.
- [Testing Workflows](/develop/ruby/testing-suite#testing-workflows): Ensure the functionality and reliability of your Workflows.
- [Testing Activities](/develop/ruby/testing-suite#test-activities): Validate the execution and outcomes of your Activities.
- [Replay test](/develop/ruby/testing-suite#replay-test): Replay recreates the exact state of a Workflow Execution.

## [Failure detection](/develop/ruby/failure-detection)

Explore how your application can detect failures using timeouts and automatically attempt to mitigate them with retries.

- [Workflow timeouts](/develop/ruby/failure-detection#workflow-timeouts): Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution.
- [Workflow retries](/develop/ruby/failure-detection#workflow-retries): A Workflow Retry Policy can be used to retry a Workflow Execution in the event of a failure.
- [Activity timeouts](/develop/ruby/failure-detection#activity-timeouts): Each Activity timeout controls the maximum duration of a different aspect of an Activity Execution.
- [Set an Activity Retry Policy](/develop/ruby/failure-detection#activity-retries): Define retry logic for Activities to handle failures.
- [Heartbeat an Activity](/develop/ruby/failure-detection#activity-heartbeats): An Activity Heartbeat is a ping from the Worker that is executing the Activity to the Temporal Service.
- [Heartbeat Timeout](/develop/ruby/failure-detection#heartbeat-timeout): A Heartbeat Timeout works in conjunction with Activity Heartbeats.

## [Workflow message passing](/develop/ruby/message-passing)

Send messages to and read the state of Workflow Executions.

### Signals

- [Define Signal](/develop/ruby/message-passing#signals): A Signal is a message sent to a running Workflow Execution.
- [Send a Signal from a Temporal Client](/develop/ruby/message-passing#send-signal-from-client): Send a Signal to a Workflow from a Temporal Client.
- [Send a Signal from a Workflow](/develop/ruby/message-passing#send-signal-from-workflow): Send a Signal to another Workflow from within a Workflow; this is also called an External Signal.
- [Signal-With-Start](/develop/ruby/message-passing#signal-with-start): Start a Workflow and send it a Signal in a single operation, used from the Client.
- [Dynamic Handler](/develop/ruby/message-passing#dynamic-handler): Dynamic Handlers provide flexibility to handle cases where the names of Workflows, Activities, Signals, or Queries aren't known at run time.
- [Set a Dynamic Signal](/develop/ruby/message-passing#set-a-dynamic-signal): A Dynamic Signal in Temporal is a Signal that is invoked dynamically at runtime if no other Signal with the same name is registered.

### Queries

- [Define a Query](/develop/ruby/message-passing#queries): A Query is a synchronous operation that is used to get the state of a Workflow Execution.
- [Send Queries](/develop/ruby/message-passing#send-query): Queries are sent from the Temporal Client.
- [Set a Dynamic Query](/develop/ruby/message-passing#set-a-dynamic-query): A Dynamic Query in Temporal is a Query that is invoked dynamically at runtime if no other Query with the same name is registered.

### Updates

- [Define an Update](/develop/ruby/message-passing#updates): An Update is an operation that can mutate the state of a Workflow Execution and return a response.
- [Send an Update](/develop/ruby/message-passing#send-update-from-client): An Update is sent from the Temporal Client.

## [Interrupt a Workflow](/develop/ruby/interrupt-workflow)

Interrupt a Workflow Execution with a Cancel or Terminate action.

- [Cancel a Workflow](/develop/ruby/interrupt-workflow#cancellation): Interrupt a Workflow Execution and its Activities through Workflow cancellation.
- [Terminate a Workflow](/develop/ruby/interrupt-workflow#termination): Interrupt a Workflow Execution and its Activities through Workflow termination.
- [Reset a Workflow](/develop/ruby/interrupt-workflow#reset): Resume a Workflow Execution from an earlier point in its Event History.

## [Asynchronous Activity completion](/develop/ruby/asynchronous-activity)

Complete Activities asynchronously.

- [Asynchronous Activity](/develop/ruby/asynchronous-activity): Asynchronous Activity completion enables the Activity Function to return without the Activity Execution completing.

## [Versioning](/develop/ruby/versioning)

Change Workflow Definitions without causing non-deterministic behavior in running Workflows.

- [Use the Ruby SDK Patching API](/develop/ruby/versioning#ruby-sdk-patching-api): Patching Workflows using the Ruby SDK.

## [Observability](/develop/ruby/observability)

Configure and use the Temporal Observability APIs.

- [Emit Metrics](/develop/ruby/observability#metrics): Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process.
- [Set up Tracing](/develop/ruby/observability#tracing): Explains how the Ruby SDK supports tracing and custom context propagation.
- [Log from a Workflow](/develop/ruby/observability#logging): Send logs and errors to a logging service, so that when things go wrong, you can see what happened.
- [Use Visibility APIs](/develop/ruby/observability#visibility): The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

## [Debugging](/develop/ruby/debugging)

Explore various ways to debug your application.
- [Debug in a development environment](/develop/ruby/debugging#debug-in-a-development-environment): In addition to the normal development tools of logging and a debugger, you can also see what's happening in your Workflow by using the Web UI and the Temporal CLI.
- [Debug in a production environment](/develop/ruby/debugging#debug-in-a-production-environment): Debug production Workflows using the Web UI, the Temporal CLI, Replays, Tracing, or Logging.

## [Schedules](/develop/ruby/schedules)

Run Workflows on a schedule and delay the start of a Workflow.

- [Schedule a Workflow](/develop/ruby/schedules#schedule-a-workflow)
- [Create a Scheduled Workflow](/develop/ruby/schedules#create-a-workflow): Create a new schedule for a scheduled Workflow.
- [Backfill a Scheduled Workflow](/develop/ruby/schedules#backfill-a-scheduled-workflow): Backfill a past time range of actions for a scheduled Workflow.
- [Delete a Scheduled Workflow](/develop/ruby/schedules#delete-a-scheduled-workflow): Delete a schedule for a scheduled Workflow.
- [Describe a Scheduled Workflow](/develop/ruby/schedules#describe-a-scheduled-workflow): Get schedule configuration and current state for a scheduled Workflow.
- [List a Scheduled Workflow](/develop/ruby/schedules#list-a-scheduled-workflow): List a schedule for a scheduled Workflow.
- [Pause a Scheduled Workflow](/develop/ruby/schedules#pause-a-scheduled-workflow): Pause a schedule for a scheduled Workflow.
- [Trigger a Scheduled Workflow](/develop/ruby/schedules#trigger-a-scheduled-workflow): Trigger an immediate action for a scheduled Workflow.
- [Update a Scheduled Workflow](/develop/ruby/schedules#update-a-scheduled-workflow): Update a schedule with a new definition for a scheduled Workflow.
- [Use Start Delay](/develop/ruby/schedules#start-delay): Use Start Delay functionality if you need to delay the execution of a Workflow without requiring regular launches.

## [Data encryption](/develop/ruby/converters-and-encryption)

Use compression, encryption, and other data handling by implementing custom converters and codecs.

- [Use a custom Payload Codec](/develop/ruby/converters-and-encryption#custom-payload-codec): Create a custom PayloadCodec implementation and define your encryption/compression and decryption/decompression logic.
- [Use a custom Payload Converter](/develop/ruby/converters-and-encryption#custom-payload-converter): A custom data converter can be set via the `DataConverter` option when creating a client.

## [Durable Timers](/develop/ruby/durable-timers)

Use Timers to make a Workflow Execution pause or "sleep" for seconds, minutes, days, months, or years.

- [Sleep](/develop/ruby/durable-timers): A Timer lets a Workflow sleep for a fixed time period.

## [Child Workflows](/develop/ruby/child-workflows)

Explore how to spawn a Child Workflow Execution and handle Child Workflow Events.

- [Start a Child Workflow Execution](/develop/ruby/child-workflows): A Child Workflow Execution is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.
- [Set a Parent Close Policy](/develop/ruby/child-workflows#parent-close-policy): A Parent Close Policy determines what happens to a Child Workflow Execution if its Parent changes to a Closed status.

## [Continue-As-New](/develop/ruby/continue-as-new)

Continue the Workflow Execution with a new Workflow Execution using the same Workflow ID.
- [Continue-As-New](/develop/ruby/continue-as-new): Continue-As-New enables a Workflow Execution to close successfully and create a new Workflow Execution in a single atomic operation if the number of Events in the Event History is becoming too large.

## [Enriching the User Interface](/develop/ruby/enriching-ui)

Add descriptive information to workflows and events for better visibility and context in the UI.

- [Adding Context](/develop/ruby/enriching-ui)

---

## Interrupt a Workflow - Ruby SDK

This page shows how to interrupt a Workflow Execution. You can interrupt a Workflow Execution in one of the following ways:

- [Cancel](#cancellation): Canceling a Workflow provides a graceful way to stop Workflow Execution.
- [Terminate](#termination): Terminating a Workflow forcefully stops Workflow Execution.

Terminating a Workflow forcefully stops Workflow Execution. This action resembles killing a process.

- The system records a `WorkflowExecutionTerminated` event in the Workflow History.
- The termination forcefully and immediately stops the Workflow Execution.
- The Workflow code gets no chance to handle termination.
- A Workflow Task doesn't get scheduled.

In most cases, canceling is preferable because it allows the Workflow to finish gracefully. Terminate only if the Workflow is stuck and cannot be canceled normally.

## Cancellation {#cancellation}

To give a Workflow and its Activities the ability to be canceled, do the following:

- Handle a Cancellation request within a Workflow.
- Set Activity Heartbeat Timeouts.
- Listen for and handle a Cancellation request within an Activity.
- Send a Cancellation request from a Temporal Client.

## Handle Cancellation in Workflow {#handle-cancellation-in-workflow}

Workflow Definitions can be written to respond to cancellation requests. It is common for an Activity to be run on Cancellation to perform cleanup.

Cancellation Requests on a Workflow cancel `Temporalio::Workflow.cancellation`, a `Temporalio::Cancellation` that effectively serves as a cancellation token. This cancellation is implicitly used for all calls within the Workflow as well (e.g. Timers, Activities), so cancellation propagates to them to be handled and bubble out.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    # Whether this workflow waits on the activity to handle the cancellation or not is
    # dependent upon the cancellation_type parameter. We leave the default here which
    # sends the cancellation but does not wait on it to be handled.
    Temporalio::Workflow.execute_activity(MyActivity, start_to_close_timeout: 100)
  rescue Temporalio::Error => e
    # For this sample, we only want to execute cleanup when it's a cancellation
    raise unless Temporalio::Error.canceled?(e)

    # Call a cleanup activity. We have to do this with a new/detached cancellation
    # because the default workflow-level one is already canceled at this point.
    Temporalio::Workflow.execute_activity(
      MyCleanupActivity,
      start_to_close_timeout: 100,
      cancellation: Temporalio::Cancellation.new
    )
    # Re-raise the original exception
    raise
  end
end
```

## Handle Cancellation in an Activity {#handle-cancellation-in-an-activity}

Ensure that the Activity is [Heartbeating](/develop/ruby/failure-detection#activity-heartbeats) to receive the Cancellation request and stop execution. Also make sure that the [Heartbeat Timeout](/develop/ruby/failure-detection#heartbeat-timeout) is set on the Activity options when calling from the Workflow, as shown in the sketch that follows.
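For example, the Workflow-side call might look like this minimal sketch (the Activity name and timeout values are illustrative):

```ruby
# Workflow-side call: a heartbeat_timeout ensures the server quickly notices
# when the Worker stops heartbeating. Because cancellation requests are
# delivered via Heartbeats, a short heartbeat_timeout also encourages frequent
# heartbeating and therefore timely cancellation delivery.
Temporalio::Workflow.execute_activity(
  MyActivity,
  start_to_close_timeout: 5 * 60,
  heartbeat_timeout: 30
)
```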
An Activity Cancellation Request raises a `Temporalio::Error::CanceledError` in the Activity.

```ruby
class MyActivity < Temporalio::Activity::Definition
  def execute
    # This is a naive loop simulating work, but similar heartbeat/cancellation logic
    # applies to other scenarios as well
    loop do
      # Send heartbeat
      Temporalio::Activity::Context.current.heartbeat
      # Sleep before heartbeating again
      sleep(3)
    end
  rescue Temporalio::Error::CanceledError
    raise 'Canceled!'
  end
end
```

## Request Cancellation {#request-cancellation}

Use `cancel` on the `WorkflowHandle` to cancel a Workflow Execution.

```ruby
# Get a workflow handle by its workflow ID. This could be made specific to a run by
# passing run ID. This could also just be a handle that is returned from
# start_workflow instead.
handle = my_client.workflow_handle('my-workflow-id')

# Send cancellation. This returns when cancellation is received by the server. Wait on
# the handle's result to wait for cancellation to be applied.
handle.cancel
```

By default, Activities are automatically canceled when the Workflow is canceled, because Activities use the Workflow cancellation by default. To issue a cancellation explicitly, a new cancellation token can be created.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    # Create a new cancellation linked to the workflow one, so that it inherits
    # cancellation that comes from the workflow. Users can choose to make it
    # completely detached by not providing a parent.
    cancellation, cancel_proc = Temporalio::Cancellation.new(
      Temporalio::Workflow.cancellation
    )

    # Start the activity in the background. Whether this workflow waits on the activity
    # to handle the cancellation or not is dependent upon the cancellation_type
    # parameter. We leave the default here which sends the cancellation but does not wait
    # on it to be handled.
    future = Temporalio::Workflow::Future.new do
      Temporalio::Workflow.execute_activity(
        MyActivity,
        start_to_close_timeout: 100,
        cancellation:
      )
    end

    # Wait 5 minutes, then cancel it
    Temporalio::Workflow.sleep(5 * 60)
    cancel_proc.call

    # Wait on the activity which will raise an activity error with a cause of
    # cancellation which will fail the workflow
    future.wait
  end
end
```

## Termination {#termination}

To Terminate a Workflow Execution in Ruby, use the `terminate` method on the Workflow handle.

```ruby
# Get a workflow handle by its workflow ID. This could be made specific to a run by
# passing run ID. This could also just be a handle that is returned from
# start_workflow instead.
handle = my_client.workflow_handle('my-workflow-id')

# Terminate
handle.terminate
```

Workflow Executions can also be Terminated directly from the Web UI. In that case, a custom note can be logged from the UI when the termination happens.

## Reset a Workflow Execution {#reset}

Resetting a Workflow Execution terminates the current Workflow Execution and starts a new Workflow Execution from a point you specify in its Event History. Use reset when a Workflow is blocked due to a non-deterministic error or other issues that prevent it from completing.

When you reset a Workflow, the Event History up to the reset point is copied to the new Workflow Execution, and the Workflow resumes from that point with the current code. Reset only works if you've fixed the underlying issue, such as removing non-deterministic code. Any progress made after the reset point will be discarded. Provide a reason when resetting, as it will be recorded in the Event History.

To reset from the Web UI:
1. Navigate to the Workflow Execution details page.
2. Click the **Reset** button in the top right dropdown menu.
3. Select the Event ID to reset to.
4. Provide a reason for the reset.
5. Confirm the reset.

The Web UI shows available reset points and creates a link to the new Workflow Execution after the reset completes.

Use the `temporal workflow reset` command to reset a Workflow Execution:

```bash
temporal workflow reset \
  --workflow-id <workflow-id> \
  --event-id <event-id> \
  --reason "Reason for reset"
```

For example:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code"
```

By default, the command resets the latest Workflow Execution in the `default` Namespace. Use `--run-id` to reset a specific run. Use `--namespace` to specify a different Namespace:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code" \
  --namespace my-namespace \
  --tls-cert-path /path/to/cert.pem \
  --tls-key-path /path/to/key.pem
```

Monitor the new Workflow Execution after resetting to ensure it completes successfully.

---

## Workflow message passing - Ruby SDK

A Workflow can act like a stateful service that receives messages: Queries, Signals, and Updates. These messages interact with the Workflow via handler methods defined in the Workflow code. Clients use messages to read Workflow state or change its behavior. See [Workflow message passing](/encyclopedia/workflow-message-passing) for a general overview.

## Write message handlers {#writing-message-handlers}

:::info

The code that follows is part of a [working solution](https://github.com/temporalio/samples-ruby/tree/main/message_passing_simple).

:::

Follow these guidelines when writing your message handlers:

- Message handlers are defined as methods on the Workflow class, declared by calling one of three class methods before defining the handler method: `workflow_query`, `workflow_signal`, and `workflow_update`. These also implicitly create class methods with the same names as the instance methods, for use by callers.
- The parameters and return values of handlers and the main Workflow method must be [serializable](/dataconversion).
- Prefer a single hash/object input parameter over multiple input parameters. Hash/object parameters allow you to add fields without changing the calling signature.

### Query handlers {#queries}

A [Query](/sending-messages#sending-queries) is a synchronous operation that retrieves state from a Workflow Execution. Define one as a method:

```ruby
class GreetingWorkflow < Temporalio::Workflow::Definition
  # ...

  workflow_query
  def languages(input)
    # A query handler returns a value: it can inspect but must not mutate the Workflow state.
    if input['include_unsupported']
      CallGreetingService.greetings.keys.sort
    else
      @greetings.keys.sort
    end
  end

  # ...
end
```

Or as an attribute reader:

```ruby
class GreetingWorkflow < Temporalio::Workflow::Definition
  # This is the equivalent of:
  #   workflow_query
  #   def language
  #     @language
  #   end
  workflow_query_attr_reader :language

  # ...
end
```

- The `workflow_query` class method can accept arguments. See the API reference docs: [`workflow_query`](https://ruby.temporal.io/Temporalio/Workflow/Definition.html#workflow_query-class_method).
- A Query handler must not modify Workflow state.
- You can't perform async blocking operations such as executing an Activity in a Query handler.
### Signal handlers {#signals}

A [Signal](/sending-messages#sending-signals) is an asynchronous message sent to a running Workflow Execution to change its state and control its flow:

```ruby
class GreetingWorkflow < Temporalio::Workflow::Definition
  # ...

  workflow_signal
  def approve(input)
    # A signal handler mutates the workflow state but cannot return a value.
    @approved_for_release = true
    @approver_name = input['name']
  end

  # ...
end
```

- The `workflow_signal` class method can accept arguments. Refer to the API docs: [`workflow_signal`](https://ruby.temporal.io/Temporalio/Workflow/Definition.html#workflow_signal-class_method).
- The handler should not return a value. The response is sent immediately from the server, without waiting for the Workflow to process the Signal.
- Signal (and Update) handlers can be asynchronous and blocking. This allows you to use Activities, Child Workflows, durable Timers, wait conditions, and more. See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for guidelines on safely using async Signal and Update handlers.

### Update handlers and validators {#updates}

An [Update](/sending-messages#sending-updates) is a trackable synchronous request sent to a running Workflow Execution. It can change the Workflow state, control its flow, and return a result. The sender must wait until the Worker accepts or rejects the Update. The sender may wait further to receive a returned value, or an exception if something goes wrong:

```ruby
class GreetingWorkflow < Temporalio::Workflow::Definition
  # ...

  workflow_update
  def set_language(new_language) # rubocop:disable Naming/AccessorMethodName
    # An update handler can mutate the workflow state and return a value.
    prev = @language.to_sym
    @language = new_language.to_sym
    prev
  end

  workflow_update_validator(:set_language)
  def validate_set_language(new_language)
    # In an update validator you raise any exception to reject the update.
    raise "#{new_language} is not supported" unless @greetings.include?(new_language.to_sym)
  end

  # ...
end
```

- The `workflow_update` class method can take arguments as described in the API reference docs for [`workflow_update`](https://ruby.temporal.io/Temporalio/Workflow/Definition.html#workflow_update-class_method).
- About validators:
  - Use validators to reject an Update before it is written to History. Validators are always optional. If you don't need to reject Updates, you can skip them.
  - Define an Update validator with the [`workflow_update_validator`](https://ruby.temporal.io/Temporalio/Workflow/Definition.html#workflow_update_validator-class_method) class method invoked before defining the method. The first parameter when declaring the validator is the name of the Update handler method. The validator must accept the same argument types as the handler and should not return a value.
- Accepting and rejecting Updates with validators:
  - To reject an Update, raise an exception of any type in the validator.
  - Without a validator, Updates are always accepted.
- Validators and Event History:
  - The `WorkflowExecutionUpdateAccepted` event is written into the History whether the acceptance was automatic or programmatic.
  - When a validator raises an error, the Update is rejected, the handler is not run, and `WorkflowExecutionUpdateAccepted` _won't_ be added to the Event History. The caller receives an "Update failed" error.
- Use [`current_update_info`](https://ruby.temporal.io/Temporalio/Workflow.html#current_update_info-class_method) to obtain information about the current Update. This includes the Update ID, which can be useful for deduplication when using Continue-As-New: see [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing).
- Update (and Signal) handlers can be asynchronous and blocking. This allows you to use Activities, Child Workflows, durable Timers, wait conditions, and more. See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for guidelines on safely using async Update and Signal handlers.

## Send messages {#send-messages}

To send Queries, Signals, or Updates you call methods on a [`WorkflowHandle`](https://ruby.temporal.io/Temporalio/Client/WorkflowHandle.html) instance. To obtain the Workflow handle, you can:

- Use [`Client#start_workflow`](https://ruby.temporal.io/Temporalio/Client.html#start_workflow-instance_method) to start a Workflow and return its handle.
- Use the [`Client#workflow_handle`](https://ruby.temporal.io/Temporalio/Client.html#workflow_handle-instance_method) method to retrieve a Workflow handle by its Workflow Id.

For example:

```ruby
client = Temporalio::Client.connect('localhost:7233', 'default')
handle = client.start_workflow(
  MessagePassingSimple::GreetingWorkflow,
  id: 'message-passing-simple-sample-workflow-id',
  task_queue: 'message-passing-simple-sample'
)
```

To check the argument types required when sending messages -- and the return type for Queries and Updates -- refer to the corresponding handler method in the Workflow Definition.

:::warning Using Continue-as-New and Updates

- Temporal _does not_ support Continue-as-New functionality within Update handlers.
- Complete all handlers _before_ using Continue-as-New.
- Use Continue-as-New from your main Workflow Definition method, just as you would complete or fail a Workflow Execution.

:::

### Send a Query {#send-query}

Call a Query method with [`WorkflowHandle#query`](https://ruby.temporal.io/Temporalio/Client/WorkflowHandle.html#query-instance_method):

```ruby
supported_languages = handle.query(MessagePassingSimple::GreetingWorkflow.languages, { include_unsupported: false })
```

- Sending a Query doesn't add events to a Workflow's Event History.
- You can send Queries to closed Workflow Executions within a Namespace's Workflow retention period. This includes Workflows that have completed, failed, or timed out. Querying terminated Workflows is not safe and, therefore, not supported.
- A Worker must be online and polling the Task Queue to process a Query.

### Send a Signal {#send-signal}

You can send a Signal to a Workflow Execution from a Temporal Client or from another Workflow Execution. However, you can only send Signals to Workflow Executions that haven't closed.

#### From a Client {#send-signal-from-client}

Use [`WorkflowHandle#signal`](https://ruby.temporal.io/Temporalio/Client/WorkflowHandle.html#signal-instance_method) from Client code to send a Signal:

```ruby
handle.signal(MessagePassingSimple::GreetingWorkflow.approve, { name: 'John Q. Approver' })
```

- The call returns when the server accepts the Signal; it does _not_ wait for the Signal to be delivered to the Workflow Execution.
- The [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the Workflow's Event History.
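A Signal sent to a Workflow Execution that has already closed fails with a `NOT_FOUND` RPC error, as described in the [Troubleshooting](#message-handler-troubleshooting) section below. A minimal sketch of handling that case on the Client, using the `RPCError` `code` documented there:

```ruby
begin
  handle.signal(MessagePassingSimple::GreetingWorkflow.approve, { name: 'John Q. Approver' })
rescue Temporalio::Error::RPCError => e
  # NOT_FOUND means the Workflow Execution is closed or doesn't exist
  raise unless e.code == Temporalio::Error::RPCError::Code::NOT_FOUND

  puts 'Workflow already closed; nothing to approve'
end
```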
#### From a Workflow {#send-signal-from-workflow}

A Workflow can send a Signal to another Workflow; this is known as an _External Signal_. In this case you need to obtain a Workflow handle for the external Workflow. Use `Temporalio::Workflow.external_workflow_handle`, passing a running Workflow Id, to retrieve the handle:

```ruby
class WorkflowB < Temporalio::Workflow::Definition
  def execute
    handle = Temporalio::Workflow.external_workflow_handle('workflow-a-id')
    handle.signal(WorkflowA.some_signal, 'some signal arg')
  end
end
```

When an External Signal is sent:

- A [SignalExternalWorkflowExecutionInitiated](/references/events#signalexternalworkflowexecutioninitiated) Event appears in the sender's Event History.
- A [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the recipient's Event History.

#### Signal-With-Start {#signal-with-start}

Signal-With-Start allows a Client to send a Signal to a Workflow Execution, starting the Execution if it is not already running. If there's a Workflow running with the given Workflow Id, it will be signaled. If there isn't, a new Workflow will be started and immediately signaled.

To use Signal-With-Start, call `signal_with_start_workflow` with a `WithStartWorkflowOperation`:

```ruby
client = Temporalio::Client.connect('localhost:7233', 'default')

# Create start-workflow operation for use with signal-with-start
start_workflow_operation = Temporalio::Client::WithStartWorkflowOperation.new(
  MyWorkflow, 'my-workflow-input',
  id: 'my-workflow-id',
  task_queue: 'my-workflow-task-queue'
)

# Perform signal-with-start
handle = client.signal_with_start_workflow(
  MyWorkflow.my_signal, 'signal-input',
  start_workflow_operation:
)
```

### Send an Update {#send-update-from-client}

An Update is a synchronous, blocking call that can change Workflow state, control its flow, and return a result. A Client sending an Update must wait until the Server delivers the Update to a Worker, so Workers must be available and responsive. If you need a response as soon as the Server receives the request, use a Signal instead. Also note that you can't send Updates to other Workflow Executions.

- `WorkflowExecutionUpdateAccepted` is added to the Event History when the Worker confirms that the Update passed validation.
- `WorkflowExecutionUpdateCompleted` is added to the Event History when the Worker confirms that the Update has finished.

To send an Update to a Workflow Execution, you can:

- Call the Update method with `execute_update` on the Workflow handle and wait for the Update to complete. This code fetches an Update result:

  ```ruby
  prev_language = handle.execute_update(MessagePassingSimple::GreetingWorkflow.set_language, :chinese)
  ```

- Use `start_update` to receive a handle as soon as the Update is accepted. It returns a `WorkflowUpdateHandle`.
  - Use this `WorkflowUpdateHandle` later to fetch your results.
  - Asynchronous Update handlers normally perform long-running async Activities.
  - `start_update` only waits until the Worker has accepted or rejected the Update, not until all asynchronous operations are complete.

  For example:

  ```ruby
  # Start an update and then wait for it to complete
  update_handle = handle.start_update(
    MessagePassingSimple::GreetingWorkflow.apply_language_with_lookup, :arabic,
    wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
  )
  prev_language = update_handle.result
  ```

For more details, see the "Async handlers" section.
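Because an Update is synchronous, a rejection by the validator (or a failure in the handler after acceptance) surfaces on the caller as a `WorkflowUpdateFailedError`, described in [Troubleshooting](#message-handler-troubleshooting) below. A minimal sketch, where `:klingon` stands in for any language the validator rejects:

```ruby
begin
  prev_language = handle.execute_update(MessagePassingSimple::GreetingWorkflow.set_language, :klingon)
rescue Temporalio::Error::WorkflowUpdateFailedError => e
  # The validator rejected the Update, or the handler failed after acceptance.
  # The underlying reason is available as the exception's cause.
  puts "Update rejected or failed: #{e.cause}"
end
```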
#### Update-With-Start {#update-with-start}

:::tip Stability

In [Public Preview](/evaluate/development-production-features/release-stages#public-preview) in Temporal Cloud.

Minimum Temporal Server version [Temporal Server version 1.26](https://github.com/temporalio/temporal/releases/tag/v1.26.2)

:::

[Update-with-Start](/sending-messages#update-with-start) lets you [send an Update](/develop/ruby/message-passing#send-update-from-client) that checks whether an already-running Workflow with that ID exists:

- If the Workflow exists, the Update is processed.
- If the Workflow does not exist, a new Workflow Execution is started with the given ID, and the Update is processed before the main Workflow method starts to execute.

Use `execute_update_with_start_workflow` to start the Update and wait for the result in one go. Alternatively, use `start_update_with_start_workflow` to start the Update and receive a `WorkflowUpdateHandle`, and then use `update_handle.result` to retrieve the result from the Update. These calls return once the requested Update wait stage has been reached, or when the request times out.

- You will need to provide a `WithStartWorkflowOperation` to define the Workflow that will be started if necessary, and its arguments.
- You must specify an [id_conflict_policy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) when creating the `WithStartWorkflowOperation`.

Note that a `WithStartWorkflowOperation` can only be used once.

Here's an example:

```ruby
client = Temporalio::Client.connect('localhost:7233', 'default')

# Create start-workflow operation for use with update-with-start
start_workflow_operation = Temporalio::Client::WithStartWorkflowOperation.new(
  MyWorkflow, 'my-workflow-input',
  id: 'my-workflow-id',
  task_queue: 'my-workflow-task-queue',
  id_conflict_policy: Temporalio::WorkflowIDConflictPolicy::USE_EXISTING
)

# Perform update-with-start and get update result
update_result = client.execute_update_with_start_workflow(
  MyWorkflow.my_update, 'update-input',
  start_workflow_operation:
)

# The workflow handle is on the start operation, here's an example of waiting on
# workflow result
workflow_result = start_workflow_operation.workflow_handle.result
```

## Message handler patterns {#message-handler-patterns}

This section covers common write operations, such as Signal and Update handlers. It doesn't apply to pure read operations, like Queries or Update validators.

:::tip

For additional information, see [Inject work into the main Workflow](/handling-messages#injecting-work-into-main-workflow) and [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing).

:::

### Add async handlers {#async-handlers}

Signal and Update handlers can be asynchronous as well as blocking. Using asynchronous calls allows you to wait for Activities, Child Workflows, durable Timers, wait conditions, etc. This expands what a handler can do, but it also means that handler executions and your main Workflow method all run concurrently, with switching occurring between them at await points. It's essential to understand what could go wrong in order to use asynchronous handlers safely. See [Workflow message passing](/encyclopedia/workflow-message-passing) for guidance on safe usage of async Signal and Update handlers, and the [Controlling handler concurrency](#control-handler-concurrency) and [Waiting for message handlers to finish](#wait-for-message-handlers) sections below.
The following code is an Activity that simulates a network call to a remote service:

```ruby
class CallGreetingService < Temporalio::Activity::Definition
  def execute(to_language)
    # Simulate a network call
    sleep(0.2)
    # This intentionally returns nil on not found
    CallGreetingService.greetings[to_language.to_sym]
  end

  def self.greetings
    @greetings ||= {
      arabic: 'مرحبا بالعالم',
      chinese: '你好,世界',
      english: 'Hello, world',
      french: 'Bonjour, monde',
      hindi: 'नमस्ते दुनिया',
      portuguese: 'Olá mundo',
      spanish: 'Hola mundo'
    }
  end
end
```

The following code is a Workflow Update handler that uses the preceding Activity asynchronously:

```ruby
class GreetingWorkflow < Temporalio::Workflow::Definition
  # ...

  workflow_update
  def apply_language_with_lookup(new_language)
    # If the greeting for the requested language isn't already known, call an
    # activity to look it up.
    unless @greetings.include?(new_language.to_sym)
      # We use a mutex so that, if this handler is executed multiple times, each execution
      # can schedule the activity only when the previously scheduled activity has
      # completed. This ensures that multiple calls to apply_language_with_lookup are
      # processed in order.
      @apply_language_mutex ||= Mutex.new
      @apply_language_mutex.synchronize do
        greeting = Temporalio::Workflow.execute_activity(
          CallGreetingService,
          new_language,
          start_to_close_timeout: 10
        )
        # The requested language might not be supported by the remote service. If so, we
        # raise ApplicationError, which will fail the update. The
        # WorkflowExecutionUpdateAccepted event will still be added to history. (Update
        # validators can be used to reject updates before any event is written to history,
        # but they cannot be async, and so we cannot use an update validator for this
        # purpose.)
        raise Temporalio::Error::ApplicationError, "Greeting service does not support #{new_language}" unless greeting

        @greetings[new_language.to_sym] = greeting
      end
    end
    set_language(new_language)
  end
end
```

After updating the code for asynchronous calls, your Update handler can schedule an Activity and await the result. Although an async Signal handler can initiate similar network tasks, using an Update handler allows the Client to receive a result or error once the Activity completes. This lets your Client track the progress of asynchronous work performed by the Update's Activities, Child Workflows, etc.

### Use wait conditions {#block-with-wait}

Sometimes, async Signal or Update handlers need to meet certain conditions before they should continue. Using a wait condition with [`wait_condition`](https://ruby.temporal.io/Temporalio/Workflow.html#wait_condition-class_method) sets a function that prevents the code from proceeding until the condition is truthy. This is an important feature that helps you control your handler logic.

Here are two important use cases for `wait_condition`:

- Waiting in a handler until it is appropriate to continue.
- Waiting in the main Workflow until all active handlers have finished.

The condition state you're waiting for can be updated by, and reflect, any part of the Workflow code: the main Workflow method, other handlers, child coroutines spawned by the main Workflow method, and so forth.

#### In handlers {#wait-in-handlers}
Consider a `ready_for_update_to_execute` method that runs before your Update handler executes. The `wait_condition` call waits until your condition is met:

```ruby
workflow_update
def my_update(my_update_input)
  Temporalio::Workflow.wait_condition { ready_for_update_to_execute(my_update_input) }
  # ...
end
```

Remember: Handlers can execute before the main Workflow method starts.

#### Before finishing the Workflow {#wait-for-message-handlers}

Workflow wait conditions can ensure your handlers complete before a Workflow finishes. When your Workflow uses async Signal or Update handlers, your main Workflow method can return or Continue-as-New while a handler is still waiting on an async task, such as an Activity result. The Workflow completing may interrupt the handler before it finishes crucial work and cause Client errors when trying to retrieve Update results. Use `Temporalio::Workflow.all_handlers_finished?` to address this problem and allow your Workflow to end smoothly:

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    # ...
    Temporalio::Workflow.wait_condition { Temporalio::Workflow.all_handlers_finished? }
    'workflow-result'
  end
end
```

By default, your Worker will log a warning when you allow a Workflow Execution to finish with unfinished handler executions. You can silence these warnings on a per-handler basis by passing the `unfinished_policy` argument to the [`workflow_signal`](https://ruby.temporal.io/Temporalio/Workflow/Definition.html#workflow_signal-class_method) / [`workflow_update`](https://ruby.temporal.io/Temporalio/Workflow/Definition.html#workflow_update-class_method) class methods:

```ruby
workflow_update unfinished_policy: Temporalio::Workflow::HandlerUnfinishedPolicy::ABANDON
def my_update
  # ...
```

See [Finishing handlers before the Workflow completes](/handling-messages#finishing-message-handlers) for more information.

### Use workflow_init to access input early

Calling the `workflow_init` class method above `initialize` gives your constructor access to [Workflow input](/handling-messages#workflow-initializers): the SDK passes the constructor the same Workflow arguments that the [Client sent](/develop/ruby/temporal-client#start-workflow). The Workflow input arguments are also passed to your `execute` method; that always happens, whether or not you use `workflow_init`.

Here's an example. The constructor and `execute` must have the same parameters with the same types:

```ruby
class WorkflowInitWorkflow < Temporalio::Workflow::Definition
  workflow_init
  def initialize(input)
    @name_with_title = "Sir #{input['name']}"
  end

  def execute(input)
    Temporalio::Workflow.wait_condition { @title_has_been_checked }
    "Hello, #{@name_with_title}"
  end

  workflow_update
  def check_title_validity
    # The handler is now guaranteed to see some workflow input since it was
    # processed by the constructor
    valid = Temporalio::Workflow.execute_activity(
      CheckTitleValidityActivity,
      @name_with_title,
      start_to_close_timeout: 100
    )
    @title_has_been_checked = true
    valid
  end
end
```

### Use locks to prevent concurrent handler execution {#control-handler-concurrency}

Concurrent processes can interact in unpredictable ways.
Incorrectly written [concurrent message-passing](/handling-messages#message-handler-concurrency) code may not work correctly when multiple handler instances run simultaneously. Here's an example of a pathological case:

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  # ...

  workflow_signal
  def bad_handler
    data = Temporalio::Workflow.execute_activity(
      FetchDataActivity,
      start_to_close_timeout: 100
    )
    @x = data['x']
    # 🐛🐛 Bug!! If multiple instances of this handler are executing concurrently, then
    # there may be times when the Workflow has @x from one Activity execution and @y
    # from another.
    Temporalio::Workflow.sleep(1)
    @y = data['y']
  end
end
```

Coordinating access with `Mutex`, a mutual exclusion lock, corrects this code. Locking makes sure that only one handler instance can execute a specific section of code at any given time:

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  # ...

  workflow_signal
  def safe_handler
    @mutex ||= Mutex.new
    @mutex.synchronize do
      data = Temporalio::Workflow.execute_activity(
        FetchDataActivity,
        start_to_close_timeout: 100
      )
      @x = data['x']
      # ✅ The mutex guarantees that only one instance of this handler runs this
      # section at a time, so @x and @y always come from the same Activity execution.
      Temporalio::Workflow.sleep(1)
      @y = data['y']
    end
  end
end
```

For more advanced concurrency control, you can combine an integer attribute with `wait_condition`, for example to build a counting semaphore.

## Troubleshooting {#message-handler-troubleshooting}

When sending a Signal, Update, or Query to a Workflow, your Client might encounter the following errors:

- **The Client can't contact the server**: You'll receive a [`Temporalio::Error::RPCError`](https://ruby.temporal.io/Temporalio/Error/RPCError.html) exception whose `code` is the `UNAVAILABLE` constant defined in [`Code`](https://ruby.temporal.io/Temporalio/Error/RPCError/Code.html) (after some retries).
- **The Workflow does not exist**: You'll receive a [`Temporalio::Error::RPCError`](https://ruby.temporal.io/Temporalio/Error/RPCError.html) exception whose `code` is the `NOT_FOUND` constant defined in [`Code`](https://ruby.temporal.io/Temporalio/Error/RPCError/Code.html).

See [Exceptions in message handlers](/handling-messages#exceptions) for a non-Ruby-specific discussion of this topic.

### Signal issues {#signal-problems}

When sending a Signal, the only exception your request can raise is `RPCError`. Requests of all kinds can fail with such exceptions during the initial (pre-Worker) part of the request lifecycle. For Queries and Updates, however, the Client also waits for a response from the Worker, so if an issue occurs while the Worker executes the handler, the Client may receive an exception.

### Update issues {#update-problems}

When working with Updates, you may encounter these errors:

- **No Workflow Workers are polling the Task Queue**: Your request will be retried by the SDK Client indefinitely. Use a `Cancellation` in your [RPC options](https://ruby.temporal.io/Temporalio/Client/RPCOptions.html) to cancel the Update. This raises a [WorkflowUpdateRPCTimeoutOrCanceledError](https://ruby.temporal.io/Temporalio/Error/WorkflowUpdateRPCTimeoutOrCanceledError.html) exception.
- **Update failed**: You'll receive a [`WorkflowUpdateFailedError`](https://ruby.temporal.io/Temporalio/Error/WorkflowUpdateFailedError.html) exception.
  There are two ways this can happen:

  - The Update was rejected by an Update validator defined in the Workflow alongside the Update handler.
  - The Update failed after having been accepted.

  Update failures are like [Workflow failures](/references/failures). Issues that cause a Workflow failure in the main method also cause Update failures in the Update handler. These might include:

  - A failed Child Workflow
  - A failed Activity (if the Activity retries have been set to a finite number)
  - The Workflow author raising `ApplicationError`
  - Any error listed in `workflow_failure_exception_types` on the Worker or [`workflow_failure_exception_type`](https://ruby.temporal.io/Temporalio/Workflow/Definition.html#workflow_failure_exception_type-class_method) on the Workflow (empty by default)

- **The handler caused the Workflow Task to fail**: A [Workflow Task Failure](/references/failures) causes the server to retry Workflow Tasks indefinitely. What happens to your Update request depends on its stage:

  - If the request hasn't been accepted by the server, you receive a `FAILED_PRECONDITION` [`Temporalio::Error::RPCError`](https://ruby.temporal.io/Temporalio/Error/RPCError.html) exception.
  - If the request has been accepted, it is durable. Once the Workflow is healthy again after a code deploy, use a [`WorkflowUpdateHandle`](https://ruby.temporal.io/Temporalio/Client/WorkflowUpdateHandle.html) to fetch the Update result.

- **The Workflow finished while the Update handler execution was in progress**: You'll receive a [`Temporalio::Error::RPCError`](https://ruby.temporal.io/Temporalio/Error/RPCError.html) "workflow execution already completed". This can happen, for example, because:

  - The Workflow was canceled or failed.
  - The Workflow completed normally or continued-as-new and the Workflow author did not [wait for handlers to be finished](/handling-messages#finishing-message-handlers).

### Query issues {#query-problems}

When working with Queries, you may encounter these errors:

- **There is no Workflow Worker polling the Task Queue**: You'll receive a [`Temporalio::Error::RPCError`](https://ruby.temporal.io/Temporalio/Error/RPCError.html) exception whose `code` is the `FAILED_PRECONDITION` constant defined in [`Code`](https://ruby.temporal.io/Temporalio/Error/RPCError/Code.html).
- **Query failed**: You'll receive a [`WorkflowQueryFailedError`](https://ruby.temporal.io/Temporalio/Error/WorkflowQueryFailedError.html) exception if something goes wrong during a Query. Any exception in a Query handler will trigger this error. This differs from Signal and Update requests, where exceptions can lead to Workflow Task Failure instead.
- **The handler caused the Workflow Task to fail**: This would happen, for example, if the Query handler blocks the thread for too long without yielding.

## Dynamic handlers {#dynamic-handler}

Temporal supports Dynamic Queries, Signals, Updates, Workflows, and Activities. These are unnamed handlers that are invoked if no other statically defined handler with the given name exists. Dynamic Handlers provide flexibility to handle cases where the names of Queries, Signals, Updates, Workflows, or Activities aren't known at run time.

:::caution

Dynamic Handlers should be used judiciously as a fallback mechanism rather than the primary approach. Overusing them can lead to maintainability and debugging issues down the line.
Instead, Signals, Queries, Workflows, or Activities should be defined statically whenever possible, with clear names that indicate their purpose. Use static definitions as the primary way of structuring your Workflows.

Reserve Dynamic Handlers for cases where the handler names are not known at compile time and need to be looked up dynamically at runtime. They are meant to handle edge cases and act as a catch-all, not as the main way of invoking logic.

:::

### Dynamic Query {#set-a-dynamic-query}

A Dynamic Query in Temporal is a Query method that is invoked dynamically at runtime if no other Query with the same name is registered. A Query can be made dynamic by setting `dynamic` to `true` on the `workflow_query` class method. Only one Dynamic Query can be present on a Workflow.

The Query handler parameters must accept a string name as the first parameter. Often users set `raw_args` to `true` and set the second parameter as `*args`, which will be an array of `Temporalio::Converters::RawValue`. The [Temporalio::Workflow.payload_converter](https://ruby.temporal.io/Temporalio/Workflow.html#payload_converter-class_method) property is used to convert the raw value instances to proper types.

```ruby
workflow_query dynamic: true, raw_args: true
def dynamic_query(query_name, *args)
  first_param = Temporalio::Workflow.payload_converter.from_payload(
    args.first || raise('Missing first parameter')
  )
  "Got parameter #{first_param} for query #{query_name}"
end
```

### Dynamic Signal {#set-a-dynamic-signal}

A Dynamic Signal in Temporal is a Signal that is invoked dynamically at runtime if no other Signal with the same name is registered. A Signal can be made dynamic by setting `dynamic` to `true` on the `workflow_signal` class method. Only one Dynamic Signal can be present on a Workflow.

The Signal handler parameters must accept a string name as the first parameter. Often users set `raw_args` to `true` and set the second parameter as `*args`, which will be an array of `Temporalio::Converters::RawValue`. The [Temporalio::Workflow.payload_converter](https://ruby.temporal.io/Temporalio/Workflow.html#payload_converter-class_method) property is used to convert the raw value instances to proper types.

```ruby
workflow_signal dynamic: true, raw_args: true
def dynamic_signal(signal_name, *args)
  first_param = Temporalio::Workflow.payload_converter.from_payload(
    args.first || raise('Missing first parameter')
  )
  @pending_things << "Got parameter #{first_param} for signal #{signal_name}"
end
```

### Dynamic Update {#set-a-dynamic-update}

A Dynamic Update in Temporal is an Update that is invoked dynamically at runtime if no other Update with the same name is registered. An Update can be made dynamic by setting `dynamic` to `true` on the `workflow_update` class method. Only one Dynamic Update can be present on a Workflow.

The Update handler parameters must accept a string name as the first parameter. Often users set `raw_args` to `true` and set the second parameter as `*args`, which will be an array of `Temporalio::Converters::RawValue`. The [Temporalio::Workflow.payload_converter](https://ruby.temporal.io/Temporalio/Workflow.html#payload_converter-class_method) property is used to convert the raw value instances to proper types.
```ruby
workflow_update dynamic: true, raw_args: true
def dynamic_update(update_name, *args)
  first_param = Temporalio::Workflow.payload_converter.from_payload(
    args.first || raise('Missing first parameter')
  )
  @pending_things << "Got parameter #{first_param} for update #{update_name}"
end
```

---

## Observability - Ruby SDK

This page covers capabilities related to viewing the state of the application, including:

- [Metrics](#metrics)
- [Tracing](#tracing)
- [Logging](#logging)
- [Visibility](#visibility)

The observability guide covers the many ways to view the current state of your [Temporal Application](/temporal#temporal-application). This includes viewing [Workflow Executions](/workflow-execution) tracked by the [Temporal Platform](/temporal#temporal-platform), as well as inspecting state at any point during execution.

## Emit metrics {#metrics}

Each Temporal SDK can optionally emit metrics from either the Client or Worker process. Metrics can be scraped by systems like Prometheus, and graphs can be created using tools like Grafana.

- For an overview of Prometheus and Grafana integration, refer to the [Monitoring](/self-hosted-guide/monitoring) guide.
- For a list of metrics, see the [SDK metrics reference](/references/sdk-metrics).

Metrics in Ruby are configured via the `metrics` argument of the `telemetry` argument when creating a global `Temporalio::Runtime`. That object should be created globally and used for all Clients, so configure it before any other Temporal code runs.

### Set a Prometheus endpoint

The following example exposes a Prometheus endpoint on port `9000`.

```ruby
Temporalio::Runtime.default = Temporalio::Runtime.new(
  telemetry: Temporalio::Runtime::TelemetryOptions.new(
    metrics: Temporalio::Runtime::MetricsOptions.new(
      prometheus: Temporalio::Runtime::PrometheusMetricsOptions.new(
        bind_address: '0.0.0.0:9000'
      )
    )
  )
)
```

### Custom metric handling

Instead of Prometheus or OpenTelemetry, an instance of `Temporalio::Runtime::MetricBuffer` can be provided as the `buffer` argument to `MetricsOptions`. `retrieve_updates` can then be called periodically on the buffer to get metric updates.

## Set up Tracing {#tracing}

Tracing enables observability into the sequence of calls across your application, including Workflows and Activities.

OpenTelemetry tracing for Clients, Activities, and Workflows can be enabled using the `Temporalio::Contrib::OpenTelemetry::TracingInterceptor`. Specifically, when creating a Client, set the interceptor like so:

```ruby
require 'opentelemetry/api'
require 'opentelemetry/sdk'
require 'temporalio/client'
require 'temporalio/contrib/open_telemetry'

# ... assumes my_otel_tracer_provider is a tracer provider created by the user
my_tracer = my_otel_tracer_provider.tracer('my-otel-tracer')

my_client = Temporalio::Client.connect(
  'localhost:7233',
  'my-namespace',
  interceptors: [Temporalio::Contrib::OpenTelemetry::TracingInterceptor.new(my_tracer)]
)
```

When your Client is connected, spans are created for all Client calls, Activities, and Workflow invocations on the Worker. Spans are created and serialized through the server to give one trace for a Workflow Execution.

## Log from a Workflow {#logging}

Logging enables you to capture and persist important execution details from your Workflow and Activity code.
Logging levels typically include:

| Level   | Use                                                                                                       |
| ------- | --------------------------------------------------------------------------------------------------------- |
| `DEBUG` | Detailed information, typically useful for debugging purposes.                                             |
| `INFO`  | General information about the application's operation.                                                     |
| `WARN`  | Indicates potentially harmful situations or minor issues that don't prevent the application from working.  |
| `ERROR` | Indicates error conditions that might still allow the application to continue running.                     |

Logging uses the Ruby standard logging APIs. The `logger` can be set when connecting a Client. The following example logs to the console and sets the level to `INFO`.

```ruby
require 'logger'
require 'temporalio/client'

my_client = Temporalio::Client.connect(
  'localhost:7233',
  'my-namespace',
  logger: Logger.new($stdout, level: Logger::INFO)
)
```

You can log from a Workflow using `Temporalio::Workflow.logger`, which is a special instance of Ruby's `Logger` that appends Workflow details to every log and does not log during replay.

```ruby
Temporalio::Workflow.logger.info("Some log #{some_value}")
```

There's also one for use in Activities that appends Activity details to every log:

```ruby
Temporalio::Activity::Context.current.logger.info("Some log #{some_value}")
```

## Use Visibility APIs {#visibility}

Visibility refers to Temporal features for listing, filtering, and inspecting Workflow Executions.

### Use Search Attributes {#search-attributes}

The typical method of retrieving a Workflow Execution is by its Workflow Id. However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments. You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime`, and `ExecutionStatus` are automatically added to Workflow Executions and indexed.
- [Custom Search Attributes](/search-attribute#custom-search-attribute) can contain their own domain-specific data (like `customerId` or `numItems`).
- A few generic Custom Search Attributes like `CustomKeywordField` and `CustomIntField` are created by default in Temporal's [Docker Compose](https://github.com/temporalio/docker-compose).

The steps to using custom Search Attributes are:

- Create a new Search Attribute in your Temporal Service using the CLI or Web UI.
  - For example: `temporal operator search-attribute create --name CustomKeywordField --type Text`
    - Replace `CustomKeywordField` with the name of your Search Attribute.
    - Replace `Text` with a type value associated with your Search Attribute: `Text` | `Keyword` | `Int` | `Double` | `Bool` | `Datetime` | `KeywordList`
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client, by including it as an argument when starting the Execution.
  - In the Workflow, by calling `Temporalio::Workflow.upsert_search_attributes`.
- Read the value of the Search Attribute:
  - On the Client, by calling `describe` on a `WorkflowHandle`.
  - In the Workflow, by looking at `Temporalio::Workflow.search_attributes`.
- Query Workflow Executions by the Search Attribute using a [List Filter](/list-filter):
  - [In the Temporal CLI](/cli/operator#list-2)
  - In code, by calling `list_workflows`.

### List Workflow Executions {#list-workflow-executions}

Use the [list_workflows](https://ruby.temporal.io/Temporalio/Client.html#list_workflows-instance_method) method on the Client and pass a [List Filter](/list-filter) as an argument to filter the listed Workflows. The result is a lazy enumerator/enumerable.

```ruby
my_client.list_workflows("WorkflowType='GreetingWorkflow'").each do |wf|
  puts "Workflow: #{wf.id}"
end
```

### Set Custom Search Attributes {#custom-search-attributes}

After you've created custom Search Attributes in your Temporal Service (using `temporal operator search-attribute create` or the Cloud UI), you can set their values when starting a Workflow.

To set custom Search Attributes, use the `search_attributes` parameter for `start_workflow` or `execute_workflow`. Keys should be predefined for reuse.

```ruby
# Predefined search attribute key, usually a global somewhere
MY_KEYWORD_KEY = Temporalio::SearchAttributes::Key.new(
  'my-keyword',
  Temporalio::SearchAttributes::IndexedValueType::KEYWORD
)

# Start workflow with the search attribute set
handle = my_client.start_workflow(
  MyWorkflow, 'some-input',
  id: 'my-workflow-id',
  task_queue: 'my-task-queue',
  search_attributes: Temporalio::SearchAttributes.new({ MY_KEYWORD_KEY => 'some-value' })
)
```

### Upsert Search Attributes {#upsert-search-attributes}

You can upsert Search Attributes to add, update, or remove Search Attributes from within Workflow code. To upsert custom Search Attributes, use the [`upsert_search_attributes`](https://ruby.temporal.io/Temporalio/Workflow.html#upsert_search_attributes-class_method) method with a set of updates. Keys should be predefined for reuse.

```ruby
# Predefined search attribute key, usually a global somewhere
MY_KEYWORD_KEY = Temporalio::SearchAttributes::Key.new(
  'my-keyword',
  Temporalio::SearchAttributes::IndexedValueType::KEYWORD
)

# ...

class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    # ...
    Temporalio::Workflow.upsert_search_attributes(MY_KEYWORD_KEY.value_set('some-new-value'))
    # ...
  end
end
```

---

## Rails integration - Ruby SDK

The Temporal Ruby SDK is a generic Ruby library that can work in any Ruby environment, but there are some common conventions Rails users should be aware of. See the [rails_app sample](https://github.com/temporalio/samples-ruby/tree/main/rails_app) for an example of using Temporal from Rails.

## ActiveRecord

It is not recommended to reuse ActiveRecord models (or other general/ORM models built for a different purpose) as Temporal models. Model purposes eventually diverge, and models for Temporal Workflows/Activities should be specific to that use, for clarity and compatibility reasons. Many Ruby ORMs also do a lot of lazy loading and therefore have unclear serialization semantics. Instead, consider having models specific to Workflows/Activities and translate to/from your existing models as needed, as sketched below. See the [ActiveModel section](/develop/ruby/converters-and-encryption#active-model) on how to do this with ActiveModel objects.
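For example, a minimal sketch of translating at the boundary, assuming a hypothetical `Order` ActiveRecord model; a plain hash has explicit, predictable serialization semantics with the default JSON-based converter:

```ruby
# Hypothetical Order ActiveRecord model; translate it into a plain hash so the
# Activity input has explicit, predictable serialization semantics.
def order_to_activity_input(order)
  {
    'order_id' => order.id,
    'total_cents' => order.total_cents,
    'email' => order.email
  }
end

# The Activity then receives and returns plain data, not ORM objects, e.g.:
# Temporalio::Workflow.execute_activity(ChargeOrderActivity, order_input, ...)
```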
## Lazy/Eager Loading

By default, Rails eagerly loads all application code on application start in production, but lazily loads it in non-production environments. Temporal Workflows by default disallow use of IO during the Workflow run. With lazy loading enabled in dev/test environments, referencing an Activity class in a Workflow before it has been explicitly required can produce an error like:

```
Cannot access File path from inside a workflow. If this is known to be safe, the code can be run in a Temporalio::Workflow::Unsafe.illegal_call_tracing_disabled block.
```

This comes from bootsnap via zeitwerk, because a class/module is being lazily loaded at Workflow runtime. Lazily loading code during a Workflow run is bad practice because it can have side effects; Workflows and the classes they reference should be eagerly loaded. To resolve this, either always eagerly load (e.g. `config.eager_load = true`) or explicitly require what a Workflow uses at the top of the file. Note that this only affects non-production environments.

---

## Schedules - Ruby SDK

This page shows how to do the following:

- [Schedule a Workflow](#schedule-a-workflow)
- [Create a Scheduled Workflow](#create-a-workflow)
- [Backfill a Scheduled Workflow](#backfill-a-scheduled-workflow)
- [Delete a Scheduled Workflow](#delete-a-scheduled-workflow)
- [Describe a Scheduled Workflow](#describe-a-scheduled-workflow)
- [List a Scheduled Workflow](#list-a-scheduled-workflow)
- [Pause a Scheduled Workflow](#pause-a-scheduled-workflow)
- [Trigger a Scheduled Workflow](#trigger-a-scheduled-workflow)
- [Update a Scheduled Workflow](#update-a-scheduled-workflow)
- [Use Start Delay](#start-delay)

## Schedule a Workflow {#schedule-a-workflow}

Scheduling Workflows is a crucial aspect of automation. By scheduling a Workflow, you can automate repetitive tasks, reduce manual intervention, and ensure timely execution. Use the following actions to manage Scheduled Workflows.

### Create a Scheduled Workflow {#create-a-workflow}

The create action enables you to create a new Schedule. When you create a new Schedule, a unique Schedule ID is generated, which you can use to reference the Schedule in other Schedule commands.

To create a Scheduled Workflow Execution in Ruby, use the [create_schedule](https://ruby.temporal.io/Temporalio/Client.html#create_schedule-instance_method) method on the Client, passing the Schedule ID and the Schedule object. Set the Schedule's `action` member to an instance of `Temporalio::Client::Schedule::Action::StartWorkflow` to schedule a Workflow Execution. The example below uses an interval-based spec; a calendar-based alternative follows.

```ruby
handle = my_client.create_schedule(
  'my_schedule_id',
  Temporalio::Client::Schedule.new(
    action: Temporalio::Client::Schedule::Action::StartWorkflow.new(
      MyWorkflow, 'some-input',
      id: 'my-workflow-id',
      task_queue: 'my-task-queue'
    ),
    spec: Temporalio::Client::Schedule::Spec.new(
      intervals: [
        Temporalio::Client::Schedule::Spec::Interval.new(
          every: 5 * 24 * 60 * 60.0 # 5 days
        )
      ]
    )
  )
)
```

:::tip Schedule Auto-Deletion

Once a Schedule has completed creating all its Workflow Executions, the Temporal Service deletes it since it won't fire again. The Temporal Service doesn't guarantee when this removal will happen.

:::
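A Schedule Spec can also be expressed as a cron string rather than an interval. A minimal sketch, assuming the `cron_expressions` field of `Temporalio::Client::Schedule::Spec` (which mirrors the other Temporal SDKs), running the Workflow at noon on weekdays:

```ruby
handle = my_client.create_schedule(
  'my-weekday-schedule-id',
  Temporalio::Client::Schedule.new(
    action: Temporalio::Client::Schedule::Action::StartWorkflow.new(
      MyWorkflow, 'some-input',
      id: 'my-workflow-id',
      task_queue: 'my-task-queue'
    ),
    spec: Temporalio::Client::Schedule::Spec.new(
      cron_expressions: ['0 12 * * MON-FRI'] # noon on weekdays
    )
  )
)
```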
### Backfill a Scheduled Workflow {#backfill-a-scheduled-workflow}

The backfill action executes Actions ahead of their specified time range. This command is useful when you need to execute a missed or delayed Action, or when you want to test the Workflow before its scheduled time.

To backfill a Scheduled Workflow Execution in Ruby, use the [backfill](https://ruby.temporal.io/Temporalio/Client/ScheduleHandle.html#backfill-instance_method) method on the Schedule Handle.

```ruby
handle = my_client.schedule_handle('my-schedule-id')
now = Time.now(in: 'UTC')
handle.backfill(
  Temporalio::Client::Schedule::Backfill.new(
    start_at: now - (4 * 60),
    end_at: now - (2 * 60),
    overlap: Temporalio::Client::Schedule::OverlapPolicy::ALLOW_ALL
  )
)
```

### Delete a Scheduled Workflow {#delete-a-scheduled-workflow}

The delete action enables you to delete a Schedule. Deleting a Schedule does not affect any Workflows that were started by the Schedule.

To delete a Scheduled Workflow Execution in Ruby, use the [delete](https://ruby.temporal.io/Temporalio/Client/ScheduleHandle.html#delete-instance_method) method on the Schedule Handle.

```ruby
handle = my_client.schedule_handle('my-schedule-id')
handle.delete
```

### Describe a Scheduled Workflow {#describe-a-scheduled-workflow}

The describe action shows the current Schedule configuration, including information about past, current, and future Workflow Runs. This command is helpful when you want to get a detailed view of the Schedule and its associated Workflow Runs.

To describe a Scheduled Workflow Execution in Ruby, use the [describe](https://ruby.temporal.io/Temporalio/Client/ScheduleHandle.html#describe-instance_method) method on the Schedule Handle.

```ruby
handle = my_client.schedule_handle('my-schedule-id')
desc = handle.describe
puts "Schedule info: #{desc.info}"
```

### List a Scheduled Workflow {#list-a-scheduled-workflow}

The list action lists all the available Schedules. This command is useful when you want to view a list of all the Schedules and their respective Schedule IDs.

To list all Schedules, use the [list_schedules](https://ruby.temporal.io/Temporalio/Client.html#list_schedules-instance_method) method on the Client. It returns a lazy enumerator/enumerable. If a Schedule is added or deleted, it may not be available in the list immediately.

```ruby
my_client.list_schedules.each do |sched|
  puts "Schedule info: #{sched}"
end
```

### Pause a Scheduled Workflow {#pause-a-scheduled-workflow}

The pause action enables you to pause and unpause a Schedule. When you pause a Schedule, all future Workflow Runs associated with the Schedule are temporarily stopped. This command is useful when you want to temporarily halt a Workflow due to maintenance or any other reason.

To pause a Scheduled Workflow Execution in Ruby, use the [pause](https://ruby.temporal.io/Temporalio/Client/ScheduleHandle.html#pause-instance_method) method on the Schedule Handle. You can pass a note to the `pause` method to provide a reason for pausing the Schedule. Unpausing is shown after this example.

```ruby
handle = my_client.schedule_handle('my-schedule-id')
handle.pause(note: 'Pausing the schedule for now')
```
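To resume a paused Schedule, use `unpause` on the Schedule Handle. A minimal sketch, assuming `unpause` accepts a `note:` just like `pause`:

```ruby
handle = my_client.schedule_handle('my-schedule-id')
# Resume the schedule; the note records why it was unpaused
handle.unpause(note: 'Resuming the schedule')
```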
### Trigger a Scheduled Workflow {#trigger-a-scheduled-workflow}

The trigger action triggers an immediate action with a given Schedule. By default, this action is subject to the Overlap Policy of the Schedule. This command is helpful when you want to execute a Workflow outside of its scheduled time.

To trigger a Scheduled Workflow Execution in Ruby, use the [trigger](https://ruby.temporal.io/Temporalio/Client/ScheduleHandle.html#trigger-instance_method) method on the Schedule Handle.

```ruby
handle = my_client.schedule_handle('my-schedule-id')
handle.trigger
```

### Update a Scheduled Workflow {#update-a-scheduled-workflow}

The update action enables you to update an existing Schedule. This command is useful when you need to modify the Schedule's configuration, such as changing the start time, end time, or interval.

To update a Scheduled Workflow Execution in Ruby, use the [update](https://ruby.temporal.io/Temporalio/Client/ScheduleHandle.html#update-instance_method) method on the Schedule Handle. This method accepts a block that receives an update input object and returns either an update containing the new Schedule, or `nil` to leave the Schedule unchanged.

```ruby
handle = my_client.schedule_handle('my-schedule-id')
handle.update do |input|
  # Return a new schedule with the action updated
  Temporalio::Client::Schedule::Update.new(
    schedule: input.description.schedule.with(
      # Update the action
      action: Temporalio::Client::Schedule::Action::StartWorkflow.new(
        MyNewWorkflow, 'some-new-input',
        id: 'my-workflow-id',
        task_queue: 'my-task-queue'
      )
    )
  )
end
```

## Use Start Delay {#start-delay}

Use `start_delay` to schedule a Workflow Execution at a specific one-time future point rather than on a recurring schedule. Pass the `start_delay` parameter to either the `start_workflow` or `execute_workflow` method on the Client.

```ruby
handle = my_client.start_workflow(
  MyWorkflow, 'some-input',
  id: 'my-workflow-id',
  task_queue: 'my-task-queue',
  start_delay: 3 * 60 * 60 # 3 hours
)
```

---

## Set up your local with the Ruby SDK

---

# Quickstart

This guide walks you through setting up the Temporal Ruby SDK and running your first Workflow. In just a few steps, you'll install the SDK and start a local development server. To validate that your local environment is correctly installed, we will execute a Workflow that outputs "Hello, Temporal".

## Installation

This step sets up a new Ruby project using Bundler and installs the Temporal Ruby SDK. We recommend using [Bundler](https://bundler.io/) to manage your Ruby project dependencies, including the Temporal SDK. These tutorials assume Ruby 3.4.3 or higher.

Follow the steps to create a directory, initialize the project with a `Gemfile`, and add the Temporal SDK:

1. Check your Ruby version: `ruby -v`. You should see output like `ruby 3.4.3`. Ruby 3.2+ is required; we recommend Ruby 3.4.3.
2. Create your project folder: `mkdir temporal-project`, then `cd temporal-project`.
3. Initialize with Bundler: `bundle init`
4. Add the Temporal Ruby SDK: `bundle add temporalio`. You should see output like:

   ```
   Fetching gem metadata from https://rubygems.org/...
   Resolving dependencies...
   Installing temporalio 0.4.0 (arm64-darwin)
   Bundle complete! 1 Gemfile dependency, 6 gems now installed.
   ```

5. Install dependencies: `bundle install`

**Note:**

- Only macOS ARM/x64 and Linux ARM/x64 are supported.
- Source gem is published but **cannot be built directly**.
- Windows (MinGW) is not supported.
- `fibers`/`async` are only supported on Ruby **3.3+**.
- See [Platform Support](#) for full details.

## Install Temporal CLI

The fastest way to get a development version of the Temporal Service running on your local machine is to use [Temporal CLI](https://docs.temporal.io/cli). Choose your operating system to install Temporal CLI.

- **macOS**: Install the Temporal CLI using Homebrew: `brew install temporal`
- **Windows**: Download the Temporal CLI archive for your architecture (Windows amd64 or Windows arm64), extract it, and add `temporal.exe` to your PATH.
- **Linux**: Download the Temporal CLI for your architecture (Linux amd64 or Linux arm64), extract the archive, and move the `temporal` binary into your PATH, for example: `sudo mv temporal /usr/local/bin`

## Start the development server

Once you've installed Temporal CLI and added it to your PATH, open a new Terminal window and start the development server:

```bash
temporal server start-dev
```

This command starts a local Temporal Service. It starts the Web UI, creates the default Namespace, and uses an in-memory database. The Temporal Service will be available on `localhost:7233`, and the Temporal Web UI will be available at `http://localhost:8233`. Leave the local Temporal Service running as you work through tutorials and other projects. You can stop the Temporal Service at any time by pressing `CTRL+C`.

The Temporal Web UI may be on a different port in some examples or tutorials. To change the port for the Web UI, use the `--ui-port` option when starting the server:

```bash
temporal server start-dev --ui-port 8080
```

The Temporal Web UI will then be available at `http://localhost:8080`.

Once you have everything installed, you're ready to build apps with Temporal on your local machine.

## Run Hello World: Test Your Installation

Now let's verify your setup is working by creating and running a complete Temporal application with both a Workflow and Activity. This test will confirm that:

- The Temporal Ruby SDK is properly installed
- Your local Temporal Service is running
- You can successfully create and execute Workflows and Activities
- The communication between components is functioning correctly

### 1. Create the Activity

Create an Activity file (`say_hello_activity.rb`):

```ruby
require 'temporalio/activity'

# Implementation of a simple activity
class SayHelloActivity < Temporalio::Activity::Definition
  def execute(name)
    "Hello, #{name}!"
  end
end
```

### 2. Create the Workflow

Create a Workflow file (`say_hello_workflow.rb`):

```ruby
require 'temporalio/workflow'
require_relative 'say_hello_activity'

class SayHelloWorkflow < Temporalio::Workflow::Definition
  def execute(name)
    Temporalio::Workflow.execute_activity(
      SayHelloActivity,
      name,
      schedule_to_close_timeout: 300
    )
  end
end
```

### 3. Create and Run the Worker

With your Activity and Workflow defined, you need a Worker to execute them. Workers are a crucial part of your Temporal application as they're what actually execute the tasks defined in your Workflows and Activities. For more information on Workers, see [Understanding Temporal](/evaluate/understanding-temporal#workers) and a [deep dive into Workers](/workers).

Create a Worker file (`worker.rb`):

```ruby
require 'temporalio/client'
require 'temporalio/worker'
require_relative 'say_hello_activity'
require_relative 'say_hello_workflow'

# Create a client
client = Temporalio::Client.connect('localhost:7233', 'default')

# Create a worker with the client, activities, and workflows
worker = Temporalio::Worker.new(
  client:,
  task_queue: 'my-task-queue',
  workflows: [SayHelloWorkflow],
  # There are various forms an activity can take, see "Activities" section for details
  activities: [SayHelloActivity]
)

# Run the worker until SIGINT. This can be done in many ways, see "Workers" section for details.
worker.run(shutdown_signals: ['SIGINT'])
```

Run the Worker:

```bash
ruby worker.rb
```

### 4. Execute the Workflow

Now that your Worker is running, it's time to start a Workflow Execution. Create a separate file called `starter.rb`:

```ruby
require 'temporalio/client'
require_relative 'say_hello_workflow'

# Create a client
client = Temporalio::Client.connect('localhost:7233', 'default')

# Run workflow
result = client.execute_workflow(
  SayHelloWorkflow,
  'Temporal', # This is the input to the workflow
  id: 'my-workflow-id',
  task_queue: 'my-task-queue'
)

puts "Result: #{result}"
```

Then run:

```bash
ruby starter.rb
```

### Verify Success

If everything is working correctly, you should see:

- The Worker processing the Workflow and Activity
- Output: `Result: Hello, Temporal!`
- Workflow Execution details in the [Temporal Web UI](http://localhost:8233)

Next: Run your first Temporal Application -- create a basic Workflow and run it with the Temporal Ruby SDK.

---

## Temporal Client - Ruby SDK

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the Temporal Service. Communication with a Temporal Service lets you perform actions such as starting Workflow Executions, sending Signals and Queries to Workflow Executions, getting Workflow results, and more.

This page shows you how to do the following using the Ruby SDK with the Temporal Client:

- [Connect to a local development Temporal Service](#connect-to-development-service)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Start a Workflow Execution](#start-workflow)
- [Get Workflow results](#get-workflow-results)

A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.

## Connect to development Temporal Service {#connect-to-development-service}

Use [`Client.connect`](https://ruby.temporal.io/Temporalio/Client.html#connect-class_method) to create a client. Connection options include the Temporal Server address, Namespace, and (optionally) TLS configuration. You can provide these options directly in code, load them from **environment variables**, or use a **TOML configuration file** via the [`EnvConfig`](https://ruby.temporal.io/Temporalio/EnvConfig.html) helpers. We recommend environment variables or a configuration file for secure, repeatable configuration.

When you're running a Temporal Service locally (such as with the [Temporal CLI dev server](https://docs.temporal.io/cli/server#start-dev)), the required options are minimal. If you don't specify a host/port, most connections default to `127.0.0.1:7233` and the `default` Namespace.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file, or provide the path to the file directly in code. If you don't provide the configuration file path, the SDK looks for it at `~/.config/temporalio/temporal.toml` or the equivalent on your OS. Refer to [Environment Configuration](../environment-configuration.mdx#configuration-methods) for more details about configuration files and profiles.

:::info

The connection options set in configuration files have lower precedence than environment variables.
:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines two profiles: `default` and `prod`. Each profile has its own set of connection options.

```toml title="config.toml"
# Default profile for local development
[profile.default]
address = "localhost:7233"
namespace = "default"

# Optional: Add custom gRPC headers
[profile.default.grpc_meta]
my-custom-header = "development-value"
trace-id = "dev-trace-123"

# Production profile for Temporal Cloud
[profile.prod]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"

# TLS configuration for production
[profile.prod.tls]
# TLS auto-enables when TLS config or an API key is present
# disabled = false
client_cert_path = "/etc/temporal/certs/client.pem"
client_key_path = "/etc/temporal/certs/client.key"

# Custom headers for production
[profile.prod.grpc_meta]
environment = "production"
service-version = "v1.2.3"
```

You can create a Temporal Client using a profile from the configuration file using the `ClientConfig.load_client_connect_options` function as follows. In this example, you load the `default` profile for local development:

```ruby
require 'pathname'
require 'temporalio/client'
require 'temporalio/env_config'

def main
  puts '--- Loading default profile from config.toml ---'

  # For this sample to be self-contained, we explicitly provide the path to
  # the config.toml file included in this directory.
  # By default though, the config.toml file will be loaded from
  # ~/.config/temporalio/temporal.toml (or the equivalent standard config directory on your OS).
  config_file = File.join(__dir__, 'config.toml')

  # load_client_connect_options is a helper that loads a profile and prepares
  # the configuration for Client.connect. By default, it loads the
  # "default" profile.
  args, kwargs = Temporalio::EnvConfig::ClientConfig.load_client_connect_options(
    config_source: Pathname.new(config_file)
  )

  puts "Loaded 'default' profile from #{config_file}."
  puts " Address: #{args[0]}"
  puts " Namespace: #{args[1]}"
  puts " gRPC Metadata: #{kwargs[:rpc_metadata]}"

  puts "\nAttempting to connect to client..."
  begin
    client = Temporalio::Client.connect(*args, **kwargs)
    puts '✅ Client connected successfully!'
    sys_info = client.workflow_service.get_system_info(Temporalio::Api::WorkflowService::V1::GetSystemInfoRequest.new)
    puts "✅ Successfully verified connection to Temporal server!\n#{sys_info}"
  rescue StandardError => e
    puts "❌ Failed to connect: #{e}"
  end
end

main if $PROGRAM_NAME == __FILE__
```

Use the `EnvConfig` package to set connection options for the Temporal Client using environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables will take precedence.

Set the following environment variables before running your application.
Replace the placeholder values with your actual configuration. Since this is for a local development Temporal Service, the values connect to `localhost:7233` and the `default` Namespace. You may omit these variables entirely since they're the defaults.

```bash
export TEMPORAL_NAMESPACE="default"
export TEMPORAL_ADDRESS="localhost:7233"
```

After setting the environment variables, you can create a Temporal Client as follows:

```ruby {9}
require 'temporalio/client'
require 'temporalio/env_config'

def main
  # load_client_connect_options is a helper that loads a profile and prepares
  # the configuration for Client.connect. By default, it loads the
  # "default" profile and also reads from environment variables. The environment
  # variables take precedence over the config file.
  args, kwargs = Temporalio::EnvConfig::ClientConfig.load_client_connect_options

  puts " Address: #{args[0]}"
  puts " Namespace: #{args[1]}"
  puts " gRPC Metadata: #{kwargs[:rpc_metadata]}"

  puts "\nAttempting to connect to client..."
  begin
    client = Temporalio::Client.connect(*args, **kwargs)
    puts '✅ Client connected successfully!'
    sys_info = client.workflow_service.get_system_info(Temporalio::Api::WorkflowService::V1::GetSystemInfoRequest.new)
    puts "✅ Successfully verified connection to Temporal server!\n#{sys_info}"
  rescue StandardError => e
    puts "❌ Failed to connect: #{e}"
  end
end

main if $PROGRAM_NAME == __FILE__
```

If you don't want to use environment variables or a configuration file, you can specify connection options directly in code. This is convenient for local development and testing. You can also load a base configuration from environment variables or a configuration file, and then override specific options in code, as shown in the sketch below.

Use the `connect` class method on the `Temporalio::Client` class to create a Temporal Client and connect it to the Temporal Service:

```ruby
client = Temporalio::Client.connect('localhost:7233', 'default')
```
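For example, here's a minimal sketch of loading a base configuration and then overriding one option in code before connecting (the Namespace override is illustrative):

```ruby
require 'temporalio/client'
require 'temporalio/env_config'

# Load base options from environment variables and/or the default config file
args, kwargs = Temporalio::EnvConfig::ClientConfig.load_client_connect_options

# Override the Namespace (the second positional argument) just for this client
args[1] = 'my-other-namespace'

client = Temporalio::Client.connect(*args, **kwargs)
```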
## Connect to Temporal Cloud {#connect-to-temporal-cloud}

You can connect to Temporal Cloud using either an [API key](/cloud/api-keys) or through mTLS.

Connection to Temporal Cloud or any secured Temporal Service requires additional connection options compared to connecting to an unsecured local development instance:

- Your credentials for authentication.
  - If you are using an API key, provide the API key value.
  - If you are using mTLS, provide the mTLS CA certificate and mTLS private key.
- Your _Namespace and Account ID_ combination, which follows the format `<namespace_id>.<account_id>`.
- The _endpoint_ may vary. The most common endpoint used is the gRPC regional endpoint, which follows the format `<region>.<cloud_provider>.api.temporal.io:7233`.
  - For Namespaces with High Availability features with API key authentication enabled, use the gRPC Namespace endpoint: `<namespace_id>.<account_id>.tmprl.cloud:7233`. This allows automated failover without needing to switch endpoints.

You can find the Namespace and Account ID, as well as the endpoint, on the Namespaces tab:

![The Namespace and Account ID combination on the left, and the regional endpoint on the right](/img/cloud/apikeys/namespaces-and-regional-endpoints.png)

You can provide these connection options using environment variables, a configuration file, or directly in code.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. For a list of all available configuration options you can set in the TOML file, refer to [Environment Configuration](/references/client-environment-configuration).

You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the path to the configuration file, the SDK looks for it at the default path `~/.config/temporalio/temporal.toml`.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines a `staging` profile with the necessary connection options to connect to Temporal Cloud via an API key:

```toml
# Cloud profile for Temporal Cloud
[profile.staging]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"
```

If you want to use mTLS authentication instead of an API key, replace the `api_key` field with your mTLS certificate and private key:

```toml
# Cloud profile for Temporal Cloud
[profile.staging]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
tls_client_cert_data = "your-tls-client-cert-data"
tls_client_key_path = "your-tls-client-key-path"
```

With the connection options defined in the configuration file, use the [`Client.connect` method](https://ruby.temporal.io/Temporalio/Client.html#connect-class_method) to create a Temporal Client using the `staging` profile as follows. After loading the profile, you can also programmatically override specific connection options before creating the client.

```ruby
require 'pathname'
require 'temporalio/client'
require 'temporalio/env_config'

def main
  puts "--- Loading 'staging' profile with programmatic overrides ---"
  config_file = File.join(__dir__, 'config.toml')
  profile_name = 'staging'

  # Load the 'staging' profile.
  args, kwargs = Temporalio::EnvConfig::ClientConfig.load_client_connect_options(
    profile: profile_name,
    config_source: Pathname.new(config_file)
  )

  # Override the target host in code.
  # This is the recommended way to override configuration values.
  args[0] = 'localhost:7233'

  puts "\nLoaded '#{profile_name}' profile from #{config_file} with overrides."
  puts " Address: #{args[0]} (overridden in code)"
  puts " Namespace: #{args[1]}"

  puts "\nAttempting to connect to client..."
  begin
    client = Temporalio::Client.connect(*args, **kwargs)
    puts '✅ Client connected successfully!'
    sys_info = client.workflow_service.get_system_info(Temporalio::Api::WorkflowService::V1::GetSystemInfoRequest.new)
    puts "✅ Successfully verified connection to Temporal server!\n#{sys_info}"
  rescue StandardError => e
    puts "❌ Failed to connect: #{e}"
  end
end

main if $PROGRAM_NAME == __FILE__
```

The following environment variables are required to connect to Temporal Cloud:

- `TEMPORAL_NAMESPACE`: Your Namespace and Account ID combination in the format `<namespace_id>.<account_id>`.
- `TEMPORAL_ADDRESS`: The gRPC endpoint for your Temporal Cloud Namespace.
- `TEMPORAL_API_KEY`: Your API key value. Required if you are using API key authentication.
- `TEMPORAL_TLS_CLIENT_CERT_DATA` or `TEMPORAL_TLS_CLIENT_CERT_PATH`: Your mTLS client certificate data or file path. Required if you are using mTLS authentication.
- `TEMPORAL_TLS_CLIENT_KEY_DATA` or `TEMPORAL_TLS_CLIENT_KEY_PATH`: Your mTLS client private key data or file path. Required if you are using mTLS authentication.

Ensure these environment variables exist in your environment before running your application.

Use the `EnvConfig` helpers to set connection options for the Temporal Client using environment variables. The `ClientConfig.load_client_connect_options` function will automatically load all environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables will take precedence.

After setting the environment variables, use the following code to create the Temporal Client:

```ruby {9, 17}
require 'temporalio/client'
require 'temporalio/env_config'

def main
  # load_client_connect_options is a helper that loads a profile and prepares
  # the configuration for Client.connect. By default, it loads the
  # "default" profile. It also reads from environment variables. The environment
  # variables take precedence over the config file.
  args, kwargs = Temporalio::EnvConfig::ClientConfig.load_client_connect_options

  puts " Address: #{args[0]}"
  puts " Namespace: #{args[1]}"
  puts " gRPC Metadata: #{kwargs[:rpc_metadata]}"

  puts "\nAttempting to connect to client..."
  begin
    client = Temporalio::Client.connect(*args, **kwargs)
    puts '✅ Client connected successfully!'
    sys_info = client.workflow_service.get_system_info(Temporalio::Api::WorkflowService::V1::GetSystemInfoRequest.new)
    puts "✅ Successfully verified connection to Temporal server!\n#{sys_info}"
  rescue StandardError => e
    puts "❌ Failed to connect: #{e}"
  end
end

main if $PROGRAM_NAME == __FILE__
```

You can also specify connection options directly in code to connect to Temporal Cloud. To create an initial connection, provide the endpoint, Namespace and Account ID combination, and API key values to the `Client.connect` method.

```ruby
client = Temporalio::Client.connect(
  '<endpoint>', # Endpoint
  '<namespace_id>.<account_id>', # Namespace
  api_key: '<api_key>',
  tls: true
)
```

To connect using mTLS instead of an API key, provide the mTLS certificate and private key as follows:

```ruby
client = Temporalio::Client.connect(
  '<endpoint>', # Endpoint
  '<namespace_id>.<account_id>', # Namespace
  tls: Temporalio::Client::Connection::TLSOptions.new(
    client_cert: File.read('my-client-cert.pem'),
    client_private_key: File.read('my-client-key.pem')
  )
)
```

For more information about managing and generating client certificates for Temporal Cloud, see [How to manage certificates in Temporal Cloud](/cloud/certificates).

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see [Temporal Customization Samples](https://github.com/temporalio/samples-server).
## Start a Workflow {#start-workflow}

To start a Workflow Execution, supply:

- A Task Queue
- A Workflow Type
- Input arguments
- Workflow options such as Workflow Id

To start a Workflow Execution in Ruby, use either the `start_workflow` or `execute_workflow` method on the Client. You must set a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) and [Task Queue](/task-queue) in the parameters given to the method. `execute_workflow` starts the Workflow and waits for its result in a single call:

```ruby
result = my_client.execute_workflow(
  MyWorkflow,
  'some-input',
  id: 'my-workflow-id',
  task_queue: 'my-task-queue'
)
puts "Result: #{result}"
```

## Get Workflow results {#get-workflow-results}

Once a Workflow Execution is started, the Workflow Id and Run Id can be used to uniquely identify it. You can block until the result is available, or retrieve it later using the handle. You can also use Queries to access Workflow state and results while the Workflow is running.

Use `start_workflow` or `workflow_handle` on the Client to return a Workflow handle. Then use the `result` method to await the result of the Workflow.

```ruby
handle = my_client.workflow_handle('my-workflow-id')
result = handle.result
puts "Result: #{result}"
```
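For example, here's a minimal sketch of starting a Workflow with `start_workflow` and awaiting its result later through the returned handle:

```ruby
# Start the Workflow without blocking on its result
handle = my_client.start_workflow(
  MyWorkflow,
  'some-input',
  id: 'my-workflow-id',
  task_queue: 'my-task-queue'
)

# ... do other work ...

# Block until the Workflow Execution completes
puts "Result: #{handle.result}"
```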
---

## Testing - Ruby SDK

This page shows how to do the following:

- [Understand types of tests](#types-of-tests)
- [Use compatible test frameworks](#test-frameworks)
- [Test Workflows](#testing-workflows)
- [Test Activities](#test-activities)
- [Replay tests](#replay-test)

The Ruby test-suite feature guide describes the frameworks that facilitate Workflow and integration testing.

## Types of Tests {#types-of-tests}

In the context of Temporal, you can create these types of automated tests:

- **End-to-end:** Running a Temporal Server and Worker with all its Workflows and Activities; starting and interacting with Workflows from a Client.
- **Integration:** Anything between end-to-end and unit testing.
  - Running Activities with mocked Context and other SDK imports (and usually network requests).
  - Running Workers with mock Activities, and using a Client to start Workflows.
  - Running Workflows with mocked SDK imports.
- **Unit:** Running a piece of Workflow or Activity code and mocking any code it calls.

We generally recommend writing the majority of your tests as integration tests. Because the test server supports skipping time, use the test server for both end-to-end and integration tests with Workers.

## Test frameworks {#test-frameworks}

The Ruby SDK is compatible with any testing framework and does not have a specific recommendation. Most Ruby SDK samples use [minitest](https://github.com/minitest/minitest).

## Testing Workflows {#testing-workflows}

Workflow testing can be done in an integration-test fashion against a real server; however, it is hard to simulate timeouts and other long, time-based code that way. The time-skipping Workflow test environment can help there.

### Testing Workflows with standard server

A non-time-skipping `Temporalio::Testing::WorkflowEnvironment` can be started via `start_local`, which supports all standard Temporal features. It is actually the real Temporal dev server packaged with the Temporal CLI, lazily downloaded on first use and run as a subprocess in the background. Assuming tests properly use separate Task Queues, the same server can and should be reused across tests.

Here's a simple example of a Workflow:

```ruby
class SimpleWorkflow < Temporalio::Workflow::Definition
  def execute(name)
    "Hello, #{name}!"
  end
end
```

Here's how a test of that Workflow may appear in minitest:

```ruby
def test_simple_workflow
  # Start local server that is stopped when block is done
  Temporalio::Testing::WorkflowEnvironment.start_local do |env|
    # Start worker that is stopped when block is done
    worker = Temporalio::Worker.new(
      env.client,
      task_queue: "tq-#{SecureRandom.uuid}",
      workflows: [SimpleWorkflow]
    )
    worker.run do
      # Execute workflow and check result
      result = env.client.execute_workflow(
        SimpleWorkflow,
        'some-name',
        id: "wf-#{SecureRandom.uuid}",
        task_queue: worker.task_queue
      )
      assert_equal 'Hello, some-name!', result
    end
  end
end
```

While this is just a demonstration, a local server is often used as a fixture across many tests. In minitest, for instance, users often start the environment lazily (with no block) and shut it down inside a block passed to `Minitest.after_run`.

### Testing Workflows with time skipping

Sometimes there is a need to test Workflows that run a long time or to test that timeouts occur. A time-skipping `Temporalio::Testing::WorkflowEnvironment` can be started via `start_time_skipping`, which is a reimplementation of the Temporal server with special time-skipping capabilities. Like `start_local`, this also lazily downloads the process to run when first called. Note that, unlike `start_local`, this class is neither thread safe nor safe for use with independent tests. It can technically be reused, but only for one test at a time, because time skipping is locked/unlocked at the environment level. Developers are encouraged to run it per test as needed.

#### Automatic time skipping

Here's a simple example of a Workflow that waits a day:

```ruby
class WaitADayWorkflow < Temporalio::Workflow::Definition
  def execute
    Temporalio::Workflow.sleep(1 * 24 * 60 * 60)
    'all done'
  end
end
```

A regular integration test of this Workflow on a normal server would be way too slow. However, the time-skipping server automatically skips to the next event when we wait on the result. Here's a test for that Workflow in minitest:

```ruby
def test_wait_a_day_workflow
  # Start time-skipping test server that is stopped when block is done
  Temporalio::Testing::WorkflowEnvironment.start_time_skipping do |env|
    # Start worker that is stopped when block is done
    worker = Temporalio::Worker.new(
      env.client,
      task_queue: "tq-#{SecureRandom.uuid}",
      workflows: [WaitADayWorkflow]
    )
    worker.run do
      # Execute workflow and check result
      result = env.client.execute_workflow(
        WaitADayWorkflow,
        id: "wf-#{SecureRandom.uuid}",
        task_queue: worker.task_queue
      )
      assert_equal 'all done', result
    end
  end
end
```

This test will run almost instantly. This is because by calling `execute_workflow` on our client, we are actually calling `start_workflow` + `result`, and `result` automatically skips time as much as it can (basically until the end of the workflow or until an activity is run).

To disable automatic time skipping while waiting for a workflow result, run code in a block passed to `env.auto_time_skipping_disabled`.

#### Manual time skipping

Until a Workflow is waited on, all time skipping in the time-skipping environment is done manually via `WorkflowEnvironment#sleep`.
Here's a Workflow that waits for a Signal or times out:

```ruby
class SignalWorkflow < Temporalio::Workflow::Definition
  def execute
    # Wait for signal or timeout in 45 seconds
    Temporalio::Workflow.timeout(45) do
      Temporalio::Workflow.wait_condition { @signal_received }
    end
    'got signal'
  rescue Timeout::Error
    'got timeout'
  end

  workflow_signal
  def some_signal
    @signal_received = true
  end
end
```

To test a normal Signal in minitest, you might:

```ruby
def test_signal_workflow
  Temporalio::Testing::WorkflowEnvironment.start_time_skipping do |env|
    worker = Temporalio::Worker.new(
      env.client,
      task_queue: "tq-#{SecureRandom.uuid}",
      workflows: [SignalWorkflow]
    )
    worker.run do
      handle = env.client.start_workflow(
        SignalWorkflow,
        id: "wf-#{SecureRandom.uuid}",
        task_queue: worker.task_queue
      )
      handle.signal(SignalWorkflow.some_signal)
      assert_equal 'got signal', handle.result
    end
  end
end
```

But how would you test the timeout part? Like so:

```ruby
def test_signal_workflow_timeout
  Temporalio::Testing::WorkflowEnvironment.start_time_skipping do |env|
    worker = Temporalio::Worker.new(
      env.client,
      task_queue: "tq-#{SecureRandom.uuid}",
      workflows: [SignalWorkflow]
    )
    worker.run do
      handle = env.client.start_workflow(
        SignalWorkflow,
        id: "wf-#{SecureRandom.uuid}",
        task_queue: worker.task_queue
      )
      # Advance 50 seconds
      env.sleep(50)
      assert_equal 'got timeout', handle.result
    end
  end
end
```

### Mocking Activities

When testing Workflows, often you don't want to actually run the Activities. Activities are just classes that extend `Temporalio::Activity::Definition`. Simply write different/empty/fake/asserting ones and pass those to the Worker to have different Activities called during the test.

## Testing Activities {#test-activities}

Unit testing an Activity, or any code that could run in an Activity, is done via the `Temporalio::Testing::ActivityEnvironment` class. Simply instantiate the class, and any code invoked via `run` executes inside the Activity context. Several things about the Activity environment can be customized via parameters when constructing the environment, including setting the info, providing a proc to call back on each heartbeat, and setting the cancellation to be used.
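For example, a minimal sketch of unit testing the `SayHelloActivity` from earlier (this assumes `run` accepts the Activity class followed by its arguments; the exact call shape may vary by SDK version):

```ruby
def test_say_hello_activity
  env = Temporalio::Testing::ActivityEnvironment.new
  result = env.run(SayHelloActivity, 'Temporal')
  assert_equal 'Hello, Temporal!', result
end
```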
## Replay test {#replay-test}

Given a Workflow's history, it can be replayed locally to check for things like non-determinism errors. For example, assuming the `history_json` parameter below is given a JSON string of history exported from the CLI or web UI for workflow `MyWorkflow`, the following method will replay it:

```ruby
def replay_from_json(history_json)
  # Create a replayer
  replayer = Temporalio::Worker::WorkflowReplayer.new(workflows: [MyWorkflow])
  # Replay the history
  history = Temporalio::WorkflowHistory.from_history_json(history_json)
  replayer.replay_workflow(history)
end
```

If there is a non-determinism, this will raise an exception.

Workflow history can be loaded from more than just JSON. It can be fetched individually from a Workflow handle, or even in a list. For example, the following code will check that all Workflow histories for a certain Workflow Type (i.e. Workflow class) are safe with the current Workflow code.

```ruby
# Create a replayer
replayer = Temporalio::Worker::WorkflowReplayer.new(workflows: [MyWorkflow])

# Replay all workflows from a list
replayer.replay_workflows(client.list_workflows("WorkflowType = 'MyWorkflow'")).each do |result|
  # Raise if any failed (could have just set raise_on_replay_failure: true, but this
  # demonstrates iterating over the results)
  raise result.replay_failure if result.replay_failure
end
```

---

## Versioning - Ruby SDK

Since Workflow Executions in Temporal can run for long periods, sometimes months or even years, it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress.

The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows. With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version.

There are two primary Versioning methods that you can use:

- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). The Worker Versioning feature allows you to tag your Workers and programmatically roll them out in versioned deployments, so that old Workers can run old code paths and new Workers can run new code paths.
- [Versioning with Patching](#ruby-sdk-patching-api). This method works by adding branches to your code tied to specific revisions. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions.

## Worker Versioning

Temporal's [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) feature allows you to tag your Workers and programmatically roll them out in Deployment Versions, so that old Workers can run old code paths and new Workers can run new code paths. This way, you can pin your Workflows to specific revisions, avoiding the need for patching.

## Versioning with Patching {#ruby-sdk-patching-api}

### Adding a patch

A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow Executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning).

Suppose you have an initial Workflow that runs `PrePatchActivity`:

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    result = Temporalio::Workflow.execute_activity(
      PrePatchActivity,
      start_to_close_timeout: 100
    )
    # ...
  end
end
```

Now, you want to update your code to run `PostPatchActivity` instead. This represents your desired end state.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    result = Temporalio::Workflow.execute_activity(
      PostPatchActivity,
      start_to_close_timeout: 100
    )
    # ...
  end
end
```

The problem is that you cannot deploy this new revision directly until you're certain there are no more running Workflows created using the `PrePatchActivity` code; otherwise, you are likely to cause a non-determinism error.
Instead, you'll need to use the [`patched`](https://ruby.temporal.io/Temporalio/Workflow.html#patched-class_method) function to check which version of the code should be executed.

Patching is a three-step process:

1. Patch in any new, updated code using the `patched()` function. Run the new patched code alongside old code.
2. Remove old code and use `deprecate_patch()` to mark a particular patch as deprecated.
3. Once there are no longer any open Workflow Executions of the previous version of the code, remove `deprecate_patch()`.

Let's walk through this process in sequence.

### Patching in new code

Using `patched` inserts a marker into the Workflow History. During Replay, if a Worker encounters a history with that marker, it will fail the Workflow Task when the Workflow code doesn't produce the same patch marker (in this case `my-patch`). This ensures you can safely deploy new code paths alongside the original branch.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    if Temporalio::Workflow.patched('my-patch')
      result = Temporalio::Workflow.execute_activity(
        PostPatchActivity,
        start_to_close_timeout: 100
      )
    else
      result = Temporalio::Workflow.execute_activity(
        PrePatchActivity,
        start_to_close_timeout: 100
      )
    end
    # ...
  end
end
```

### Deprecating patches {#deprecated-patches}

After ensuring that all Workflows started with the pre-patch code have left retention, you can [deprecate the patch](https://ruby.temporal.io/Temporalio/Workflow.html#deprecate_patch-class_method). Once your Workflows are no longer running the pre-patch code paths, you can deploy your code with `deprecate_patch()`. These Workers will be running the most up-to-date version of the Workflow code, which no longer requires the patch.

The `deprecate_patch()` function works similarly to the `patched()` function by recording a marker in the Workflow history. This marker does not fail Replay when Workflow code does not emit it. Deprecated patches serve as a bridge between the pre-patch code paths and the post-patch code paths, and are useful for avoiding errors resulting from patched code paths in your Workflow history.

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    Temporalio::Workflow.deprecate_patch('my-patch')
    result = Temporalio::Workflow.execute_activity(
      PostPatchActivity,
      start_to_close_timeout: 100
    )
    # ...
  end
end
```

### Removing a patch {#deploy-new-code}

Once the pre-patch Workflows have left retention, you can then safely deploy Workers that no longer use either the `patched()` or `deprecate_patch()` calls:
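At this point, the Workflow code is back to the desired end state shown at the start of this section, with no patch markers:

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    result = Temporalio::Workflow.execute_activity(
      PostPatchActivity,
      start_to_close_timeout: 100
    )
    # ...
  end
end
```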
Patching allows you to make changes to currently running Workflows. It is a powerful method for introducing compatible changes without introducing non-determinism errors.

### Workflow cutovers

To understand why Patching is useful, it's helpful to demonstrate cutting over an entire Workflow. Since incompatible changes only affect open Workflow Executions of the same type, you can avoid determinism errors by creating a whole new Workflow when making changes. To do this, you can copy the Workflow Definition function, giving it a different name, and register both names with your Workers.

For example, you would duplicate `MyWorkflow` as `MyWorkflowV2`:

```ruby
class MyWorkflow < Temporalio::Workflow::Definition
  def execute
    # ...
  end
end

class MyWorkflowV2 < Temporalio::Workflow::Definition
  def execute
    # ...
  end
end
```

You would then need to update the Worker configuration, and any other identifier strings, to register both Workflow Types:

```ruby
client = Temporalio::Client.connect('localhost:7233', 'default')
worker = Temporalio::Worker.new(
  client:,
  task_queue: 'my-task-queue',
  workflows: [MyWorkflow, MyWorkflowV2]
)
```

The downside of this method is that it requires you to duplicate code and to update any commands used to start the Workflow. This can become impractical over time. This method also does not provide a way to version any still-running Workflows; it is essentially just a cutover, unlike Patching.

### Testing a Workflow for replay safety

To determine whether your Workflow needs a patch, or that you've patched it successfully, you should incorporate [Replay Testing](/develop/ruby/testing-suite#replay-test).

---

## Safely deploying changes to Workflow code

Making changes safely to existing Workflow code requires care. Your Workflow code, as opposed to your Activity code, must be [deterministic](/workflow-definition#deterministic-constraints). This means your changes to that code have to be deterministic as well.

Changes to your Workflow code that qualify as non-deterministic need to be protected either by using [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) to pin your Workflows to specific code revisions, or by using the [patching APIs](/workflow-definition#workflow-versioning) within your Workflow code.

:::note

We strongly recommend using Worker Versioning, as users see improved error rates when adopting it.

:::

In this article, we'll provide some advice on how you can safely validate changes to your Workflow code, ensuring that you won't experience unexpected non-determinism errors in production when rolling them out.

:::caution

Eager start does not respect Worker Versioning. An eagerly started Workflow may run on any available local Worker, even if that Worker is not the Current or Ramping version of its Worker Deployment.

:::

## Use Replay Testing before and during your deployments

The best way to verify that your code won't cause non-determinism errors once deployed is to make use of [replay testing](/workflow-execution#replay). Replay testing takes one or more existing [Workflow Histories](/workflow-execution/event#event-history) that ran against a previous version of Workflow code and runs them against your _current_ Workflow code, verifying that it is compatible with the provided history.

In the case of Worker Versioning, you may have a [pinned Workflow](/worker-versioning#pinned) that you're switching over to the [current Worker deployment version](/worker-versioning#versioning-definitions) and you want to make sure that the changes don't introduce non-determinism errors. Or you may have an [Auto-Upgrade Workflow](/worker-versioning#auto-upgrade) that you want to run automated tests on to ensure the deployments don't trigger errors.

There are multiple points in your development lifecycle where running replay tests can make sense. They exist on a spectrum, with the shortest time to feedback on one end, and the most representative of a production deployment on the other.

- During development, replay testing lets you get feedback as early as possible on whether your changes are compatible. For example, you might include some integration tests that run your Workflows against the Temporal Test Server to produce histories, which you then check in. You can use those checked-in histories for replay tests to verify you haven't made breaking changes.
- During pre-deployment validation (such as during some automated deployment validation) you can get feedback in a more representative environment. For example, you might fetch histories from a live Temporal environment (whether production or some kind of pre-production) and use them in replay tests.
- At deployment time, your environment _is_ production, but you are using the new code to replay recent real-world Workflow histories.

When you're writing changes to Workflow code, you can fetch some representative histories from your pre-production or production Temporal environment and verify they work with your changes. You can do the same with the pre-merge CI pipeline. However, if you are using encrypted Payloads, which is a typical and recommended setup in production, you may not be able to decrypt the fetched histories. Additionally, if your Workflows contain any PII (which should be encrypted), make sure this information is scrubbed for the purposes of your tests, or err on the side of caution and don't use this method.

With that constraint in mind, we'll focus on how you can perform replay tests in a production deployment of a Worker with new Workflow code. The core of how replay testing is done is the same regardless of when you choose to do it, so you can apply some of the lessons here to earlier stages in your development process.

## Implement a deployment-time replay test

The key to a successful safe deployment is to break it into two phases: a verification phase, where you'll run the replay test, followed by the actual deployment of your new Worker code. You can accomplish this by wrapping your Worker application with some code that can choose whether it will run in verification mode or in production.

This is most easily done if you do not deploy your Workers side-by-side with other application code, which is a recommended best practice. If you do deploy your Workers as part of some other application, you will likely need to separate out a different entry point specifically for verification.

### Run a replay and real Worker with the same code

The following code demonstrates how the same entry point could be used to either verify the new code using replay testing, or to actually run the Worker.

```python
import argparse
import asyncio
from datetime import datetime, timedelta

from temporalio.client import Client
from temporalio.worker import Worker, Replayer

# Import your own Workflow and Activity code here, for example:
# from your_app import YourWorkflow, your_activity

async def main():
    parser = argparse.ArgumentParser(prog='MyTemporalWorker')
    parser.add_argument('mode', choices=['verify', 'run'])
    args = parser.parse_args()

    temporal_url = "localhost:7233"
    task_queue = "your-task-queue"
    my_workflows = [YourWorkflow]
    my_activities = [your_activity]

    client = await Client.connect(temporal_url)
```

Everything up to this point is standard. You import the Workflow and Activity code, instantiate a parser with two modes, and define your Task Queue, Workflows, and Activities. You can pass in the `args.mode` from any appropriate spot in your code.

If the mode is set to `verify`, you conduct the replay testing by specifying the time period to test, and passing in the Workflows corresponding to that time period. Note that the Workflows are consumed as histories, using [the `map_histories()` function](https://python.temporal.io/temporalio.client.WorkflowExecutionAsyncIterator.html#map_histories).
```python
    if args.mode == 'verify':
        start_time = (datetime.now() - timedelta(hours=10)).isoformat(timespec='seconds')
        workflows = client.list_workflows(
            f"TaskQueue={task_queue} and StartTime > '{start_time}'", limit=100)
        histories = workflows.map_histories()
        replayer = Replayer(
            workflows=my_workflows,
        )
        await replayer.replay_workflows(histories)
        return
```

If any of the Workflows fail to replay, an error will be thrown. If no errors occur, you can return to indicate success, or communicate with an endpoint you've defined to indicate success or failure of the verification.

You can then switch to the `run` mode, and have this Worker transition to a real Worker that will start pulling from the Task Queue and processing Workflows:

```python
    else:
        worker = Worker(
            client,
            task_queue=task_queue,
            workflows=my_workflows,
            activities=my_activities,
        )
        await worker.run()

if __name__ == "__main__":
    asyncio.run(main())
```

### Use the multi-modal Worker

The most straightforward way to use this bimodal Worker is to deploy one instance of it at the beginning of your deployment process in verify mode, see that it passes, and then proceed to deploy the rest of your new Workers in run mode.

---

## Asynchronous Activity Completion - TypeScript SDK

## How to asynchronously complete an Activity {#asynchronous-activity-completion}

[Asynchronous Activity Completion](/activity-execution#asynchronous-activity-completion) enables the Activity Function to return without the Activity Execution completing.

There are three steps to follow:

1. The Activity provides the external system with identifying information needed to complete the Activity Execution. Identifying information can be a [Task Token](/activity-execution#task-token), or a combination of Namespace, Workflow Id, and Activity Id.
2. The Activity Function completes in a way that identifies it as waiting to be completed by an external system.
3. The Temporal Client is used to Heartbeat and complete the Activity.

To asynchronously complete an Activity, call [`AsyncCompletionClient.complete`](https://typescript.temporal.io/api/classes/client.AsyncCompletionClient#complete).

[activities-examples/src/activities/async-completion.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-examples/src/activities/async-completion.ts)

```ts
import { activityInfo, CompleteAsyncError } from '@temporalio/activity';
import { AsyncCompletionClient } from '@temporalio/client';

export async function doSomethingAsync(): Promise<void> {
  const taskToken = activityInfo().taskToken;
  setTimeout(() => doSomeWork(taskToken), 1000);
  throw new CompleteAsyncError();
}

// this work could be done in a different process or on a different machine
async function doSomeWork(taskToken: Uint8Array): Promise<void> {
  const client = new AsyncCompletionClient();
  // does some work...
  await client.complete(taskToken, "Job's done!");
}
```

## Local Activities {#local-activities}

To call [Local Activities](/local-activity) in TypeScript, use [`proxyLocalActivities`](https://typescript.temporal.io/api/namespaces/workflow/#proxylocalactivities).

```ts
import * as workflow from '@temporalio/workflow';

const { getEnvVar } = workflow.proxyLocalActivities({
  startToCloseTimeout: '2 seconds',
});

export async function yourWorkflow(): Promise<void> {
  const someSetting = await getEnvVar('SOME_SETTING');
  // ...
}
```

Local Activities must be registered with the Worker the same way non-local Activities are.
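For example, here's a minimal sketch of registering Activities with a Worker; the `./activities` path and `my-task-queue` name are illustrative:

```ts
import { Worker } from '@temporalio/worker';
import * as activities from './activities'; // exports getEnvVar, among others

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities, // the same registration covers local and non-local Activities
    taskQueue: 'my-task-queue',
  });
  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```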
---

## Benign exceptions - TypeScript SDK

**How to mark an Activity error as benign using the Temporal TypeScript SDK**

When Activities throw errors that are expected or not severe, they can create noise in your logs, metrics, and OpenTelemetry traces, making it harder to identify real issues. By marking these errors as benign, you can exclude them from your observability data while still handling them in your Workflow logic.

To mark an error as benign, set the `category` field to `ApplicationFailureCategory.BENIGN` when creating an [`ApplicationFailure`](https://typescript.temporal.io/api/classes/common.ApplicationFailure).

Benign errors:

- Have Activity failure logs downgraded to DEBUG level
- Do not emit Activity failure metrics
- Do not set the OpenTelemetry failure status to ERROR

```typescript
import { ApplicationFailure, ApplicationFailureCategory } from '@temporalio/common';

export async function myActivity(): Promise<string> {
  try {
    return await callExternalService();
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    throw ApplicationFailure.create({
      message,
      // Mark this error as benign since it's expected
      category: ApplicationFailureCategory.BENIGN,
    });
  }
}
```

Use benign exceptions for Activity errors that occur regularly as part of normal operations, such as polling an external service that isn't ready yet, or handling expected transient failures that will be retried.

---

## Interrupt a Workflow - TypeScript SDK

## Cancellation scopes in TypeScript {#cancellation-scopes}

In the TypeScript SDK, Workflows are represented internally by a tree of cancellation scopes, each with cancellation behaviors you can specify. By default, everything runs in the "root" scope.

Scopes are created using the [CancellationScope](https://typescript.temporal.io/api/classes/workflow.CancellationScope) constructor or one of three static helpers:

- [cancellable(fn)](https://typescript.temporal.io/api/classes/workflow.CancellationScope#cancellable-1): Children are automatically cancelled when their containing scope is cancelled.
  - Equivalent to `new CancellationScope().run(fn)`.
- [nonCancellable(fn)](https://typescript.temporal.io/api/classes/workflow.CancellationScope#noncancellable): Cancellation does not propagate to children.
  - Equivalent to `new CancellationScope({ cancellable: false }).run(fn)`.
- [withTimeout(timeoutMs, fn)](https://typescript.temporal.io/api/classes/workflow.CancellationScope#withtimeout): If a timeout triggers before `fn` resolves, the scope is cancelled, triggering cancellation of any enclosed operations, such as Activities and Timers.
  - Equivalent to `new CancellationScope({ cancellable: true, timeout: timeoutMs }).run(fn)`.

Cancellations are applied to cancellation scopes, which can encompass an entire Workflow or just part of one. Scopes can be nested, and cancellation propagates from outer scopes to inner ones. A Workflow's `main` function runs in the outermost scope. Cancellations are handled by catching `CancelledFailure`s thrown by cancelable operations.

`CancellationScope.run()` and the static helpers mentioned earlier return native JavaScript promises, so you can use the familiar Promise APIs like `Promise.all` and `Promise.race` to model your asynchronous logic.

You can also use the following APIs:

- `CancellationScope.current()`: Get the current scope.
- `scope.cancel()`: Cancel all operations inside a `scope`.
- `scope.run(fn)`: Run an async function within a `scope` and return the result of `fn`.
- `scope.cancelRequested`: A promise that resolves when a scope cancellation is requested, such as when Workflow code calls `cancel()` or the entire Workflow is cancelled by an external client.

When a `CancellationScope` is cancelled, it propagates cancellation to any child scopes and to any cancelable operations created within it, such as the following:

- Activities
- Timers (created with the [sleep](https://typescript.temporal.io/api/namespaces/workflow#sleep) function)
- [Triggers](https://typescript.temporal.io/api/classes/workflow.Trigger)

### CancelledFailure

Timers and Triggers throw [CancelledFailure](https://typescript.temporal.io/api/classes/common.CancelledFailure) when cancelled; Activities and Child Workflows throw `ActivityFailure` and `ChildWorkflowFailure` with `cause` set to `CancelledFailure`.

One exception is when an Activity or Child Workflow is scheduled in an already cancelled scope (or Workflow). In this case, they propagate the `CancelledFailure` that was thrown to cancel the scope.

To simplify checking for cancellation, use the [isCancellation(err)](https://typescript.temporal.io/api/namespaces/workflow#iscancellation) function.

### Internal cancellation example

[packages/test/src/workflows/cancel-timer-immediately.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancel-timer-immediately.ts)

```ts
import { CancellationScope, CancelledFailure, sleep } from '@temporalio/workflow';

export async function cancelTimer(): Promise<void> {
  // Timers and Activities are automatically cancelled when their containing scope is cancelled.
  try {
    await CancellationScope.cancellable(async () => {
      const promise = sleep(1); // <-- Will be cancelled because it is attached to this closure's scope
      CancellationScope.current().cancel();
      await promise; // <-- Promise must be awaited in order for `cancellable` to throw
    });
  } catch (e) {
    if (e instanceof CancelledFailure) {
      console.log('Timer cancelled 👍');
    } else {
      throw e; // <-- Fail the workflow
    }
  }
}
```

Alternatively, the preceding can be written as the following.

[packages/test/src/workflows/cancel-timer-immediately-alternative-impl.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancel-timer-immediately-alternative-impl.ts)

```ts
import { CancellationScope, CancelledFailure, sleep } from '@temporalio/workflow';

export async function cancelTimerAltImpl(): Promise<void> {
  try {
    const scope = new CancellationScope();
    const promise = scope.run(() => sleep(1));
    scope.cancel(); // <-- Cancel the timer created in scope
    await promise; // <-- Throws CancelledFailure
  } catch (e) {
    if (e instanceof CancelledFailure) {
      console.log('Timer cancelled 👍');
    } else {
      throw e; // <-- Fail the workflow
    }
  }
}
```

### External cancellation example

The following code shows how to handle Workflow cancellation by an external client while an Activity is running.
[packages/test/src/workflows/handle-external-workflow-cancellation-while-activity-running.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/handle-external-workflow-cancellation-while-activity-running.ts)

```ts
import { CancellationScope, isCancellation, proxyActivities } from '@temporalio/workflow';

const { httpPostJSON, cleanup } = proxyActivities({
  startToCloseTimeout: '10m',
});

export async function handleExternalWorkflowCancellationWhileActivityRunning(url: string, data: any): Promise<void> {
  try {
    await httpPostJSON(url, data);
  } catch (err) {
    if (isCancellation(err)) {
      console.log('Workflow cancelled');
      // Cleanup logic must be in a nonCancellable scope
      // If we'd run cleanup outside of a nonCancellable scope it would've been cancelled
      // before being started because the Workflow's root scope is cancelled.
      await CancellationScope.nonCancellable(() => cleanup(url));
    }
    throw err; // <-- Fail the Workflow
  }
}
```

### nonCancellable example

`CancellationScope.nonCancellable` prevents cancellation from propagating to children.

[activities-cancellation-heartbeating/src/cancellation-scopes.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-cancellation-heartbeating/src/cancellation-scopes.ts)

```ts
export async function nonCancellable(url: string): Promise<any> {
  // Prevent Activity from being cancelled and await completion.
  // Note that the Workflow is completely oblivious and impervious to cancellation in this example.
  return CancellationScope.nonCancellable(() => httpGetJSON(url));
}
```

### withTimeout example

A common operation is to cancel one or more Activities if a deadline elapses. `withTimeout` creates a `CancellationScope` that is automatically cancelled after a timeout.

[packages/test/src/workflows/multiple-activities-single-timeout.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/multiple-activities-single-timeout.ts)

```ts
export function multipleActivitiesSingleTimeout(urls: string[], timeoutMs: number): Promise<any> {
  const { httpGetJSON } = proxyActivities({
    startToCloseTimeout: timeoutMs,
  });

  // If timeout triggers before all activities complete
  // the Workflow will fail with a CancelledError.
  return CancellationScope.withTimeout(timeoutMs, () => Promise.all(urls.map((url) => httpGetJSON(url))));
}
```

### scope.cancelRequested

You can await `cancelRequested` to make a Workflow aware of cancellation while waiting on `nonCancellable` scopes.

[packages/test/src/workflows/cancel-requested-with-non-cancellable.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancel-requested-with-non-cancellable.ts)

```ts
const { httpGetJSON } = proxyActivities({
  startToCloseTimeout: '10m',
});

export async function resumeAfterCancellation(url: string): Promise<any> {
  let result: any = undefined;
  const scope = new CancellationScope({ cancellable: false });
  const promise = scope.run(() => httpGetJSON(url));
  try {
    result = await Promise.race([scope.cancelRequested, promise]);
  } catch (err) {
    if (!(err instanceof CancelledFailure)) {
      throw err;
    }
    // Prevent Workflow from completing so Activity can complete
    result = await promise;
  }
  return result;
}
```

### Cancellation scopes and callbacks

Callbacks are not particularly useful in Workflows because all meaningful asynchronous operations return promises. In the rare case that code uses callbacks and needs to handle cancellation, a callback can consume the `CancellationScope.cancelRequested` promise.
[packages/test/src/workflows/cancellation-scopes-with-callbacks.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancellation-scopes-with-callbacks.ts)

```ts
import { CancellationScope } from '@temporalio/workflow';

function doSomething(callback: () => any) {
  setTimeout(callback, 10);
}

export async function cancellationScopesWithCallbacks(): Promise<void> {
  await new Promise<void>((resolve, reject) => {
    doSomething(resolve);
    CancellationScope.current().cancelRequested.catch(reject);
  });
}
```

### Nesting cancellation scopes

You can achieve complex flows by nesting cancellation scopes.

[packages/test/src/workflows/nested-cancellation.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/nested-cancellation.ts)

```ts
import { CancellationScope, isCancellation, proxyActivities } from '@temporalio/workflow';

const { setup, httpPostJSON, cleanup } = proxyActivities({
  startToCloseTimeout: '10m',
});

export async function nestedCancellation(url: string): Promise<void> {
  await CancellationScope.cancellable(async () => {
    await CancellationScope.nonCancellable(() => setup());
    try {
      await CancellationScope.withTimeout(1000, () => httpPostJSON(url, { some: 'data' }));
    } catch (err) {
      if (isCancellation(err)) {
        await CancellationScope.nonCancellable(() => cleanup(url));
      }
      throw err;
    }
  });
}
```

### Sharing promises between scopes

Operations like Timers and Activities are cancelled by the cancellation scope they were created in. Promises returned by these operations can be awaited in different scopes.

[activities-cancellation-heartbeating/src/cancellation-scopes.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-cancellation-heartbeating/src/cancellation-scopes.ts)

```ts
export async function sharedScopes(): Promise<any> {
  // Start activities in the root scope
  const p1 = httpGetJSON('http://url1.ninja');
  const p2 = httpGetJSON('http://url2.ninja');

  const scopePromise = CancellationScope.cancellable(async () => {
    const first = await Promise.race([p1, p2]);
    // Does not cancel activity1 or activity2 as they're linked to the root scope
    CancellationScope.current().cancel();
    return first;
  });
  return await scopePromise;
  // The Activity that did not complete will effectively be cancelled when
  // Workflow completes unless the Activity is awaited:
  // await Promise.all([p1, p2]);
}
```

[activities-cancellation-heartbeating/src/cancellation-scopes.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-cancellation-heartbeating/src/cancellation-scopes.ts)

```ts
export async function shieldAwaitedInRootScope(): Promise<any> {
  let p: Promise<any> | undefined = undefined;
  await CancellationScope.nonCancellable(async () => {
    p = httpGetJSON('http://example.com'); // <-- Start activity in nonCancellable scope without awaiting completion
  });
  // Activity is shielded from cancellation even though it is awaited in the cancellable root scope
  return p;
}
```

## Cancel an Activity from a Workflow {#cancel-an-activity}

Canceling an Activity from within a Workflow requires that the Activity Execution sends Heartbeats and sets a Heartbeat Timeout. If the Heartbeat is not invoked, the Activity cannot receive a cancellation request. When any non-immediate Activity is executed, the Activity Execution should send Heartbeats and set a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) to ensure that the server knows it is still working.

When an Activity is canceled, an error is raised in the Activity at the next available opportunity. If cleanup logic needs to be performed, it can be done in a `finally` clause or inside a caught cancel error. However, for the Activity to appear canceled, the exception needs to be re-thrown.
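For example, here's a minimal sketch of an Activity that heartbeats so cancellation can be delivered, cleans up, and then re-throws (the Activity name and polling logic are illustrative, and this assumes the Workflow set a `heartbeatTimeout` when proxying the Activity):

```ts
import { heartbeat, sleep } from '@temporalio/activity';
import { CancelledFailure } from '@temporalio/common';

export async function pollExternalService(url: string): Promise<void> {
  try {
    for (;;) {
      // ... poll the external service here ...
      heartbeat(); // lets the server deliver a cancellation request
      await sleep(1000); // cancellation-aware sleep; rejects when the Activity is cancelled
    }
  } catch (err) {
    if (err instanceof CancelledFailure) {
      // perform any cleanup here
    }
    throw err; // re-throw so the Activity is reported as cancelled
  }
}
```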
:::note

Unlike regular Activities, [Local Activities](/local-activity) can be canceled even if they don't send Heartbeats. Local Activities are handled locally, and all the information needed to handle the cancellation logic is available in the same Worker process.

:::

## Reset a Workflow Execution {#reset}

Resetting a Workflow Execution terminates the current Workflow Execution and starts a new Workflow Execution from a point you specify in its Event History. Use reset when a Workflow is blocked due to a non-deterministic error or other issues that prevent it from completing.

When you reset a Workflow, the Event History up to the reset point is copied to the new Workflow Execution, and the Workflow resumes from that point with the current code. Reset only works if you've fixed the underlying issue, such as removing non-deterministic code. Any progress made after the reset point will be discarded. Provide a reason when resetting, as it will be recorded in the Event History.

1. Navigate to the Workflow Execution details page.
2. Click the **Reset** button in the top right dropdown menu.
3. Select the Event ID to reset to.
4. Provide a reason for the reset.
5. Confirm the reset.

The Web UI shows available reset points and creates a link to the new Workflow Execution after the reset completes.

Use the `temporal workflow reset` command to reset a Workflow Execution:

```bash
temporal workflow reset \
  --workflow-id <workflow-id> \
  --event-id <event-id> \
  --reason "Reason for reset"
```

For example:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code"
```

By default, the command resets the latest Workflow Execution in the `default` Namespace. Use `--run-id` to reset a specific run. Use `--namespace` to specify a different Namespace:

```bash
temporal workflow reset \
  --workflow-id my-background-check \
  --event-id 4 \
  --reason "Fixed non-deterministic code" \
  --namespace my-namespace \
  --tls-cert-path /path/to/cert.pem \
  --tls-key-path /path/to/key.pem
```

Monitor the new Workflow Execution after resetting to ensure it completes successfully.

---

## Child Workflows - TypeScript SDK

## How to start a Child Workflow Execution {#child-workflows}

A [Child Workflow Execution](/child-workflows) is a Workflow Execution that is scheduled from within another Workflow using a Child Workflow API.

When using a Child Workflow API, Child Workflow related Events (such as [StartChildWorkflowExecutionInitiated](/references/events#startchildworkflowexecutioninitiated), [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted), and [ChildWorkflowExecutionCompleted](/references/events#childworkflowexecutioncompleted)) are logged in the Event History of the Parent Workflow Execution.

Always block progress until the [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted) Event is logged to the Event History to ensure the Child Workflow Execution has started. After that, Child Workflow Executions can be abandoned by using the `Abandon` [Parent Close Policy](/parent-close-policy) set in the Child Workflow Options.

In the TypeScript SDK, `startChild` returns a promise that resolves to a Child Workflow handle only once the Child Workflow Execution has started, so awaiting the `startChild` call is enough to be sure the Child Workflow Execution has spawned.
To start a Child Workflow Execution and return a [handle](https://typescript.temporal.io/api/interfaces/workflow.ChildWorkflowHandle/) to it, use [startChild](https://typescript.temporal.io/api/namespaces/workflow/#startchild).

```ts
export async function parentWorkflow(names: string[]) {
  const childHandle = await startChild(childWorkflow, {
    args: [names[0]],
    // workflowId, // add business-meaningful workflow id here
    // // regular workflow options apply here, with two additions (defaults shown):
    // cancellationType: ChildWorkflowCancellationType.WAIT_CANCELLATION_COMPLETED,
    // parentClosePolicy: ParentClosePolicy.PARENT_CLOSE_POLICY_TERMINATE,
  });
  // you can use childHandle to signal, query, cancel, terminate, or get result here
  await childHandle.signal('anySignal');
  const result = await childHandle.result();
}
```

To start a Child Workflow Execution and await its completion, use [executeChild](https://typescript.temporal.io/api/namespaces/workflow/#executechild). By default, a child is scheduled on the same Task Queue as the parent.

[child-workflows/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/child-workflows/src/workflows.ts)

```ts
export async function parentWorkflow(...names: string[]): Promise<string> {
  const responseArray = await Promise.all(
    names.map((name) =>
      executeChild(childWorkflow, {
        args: [name],
        // workflowId, // add business-meaningful workflow id here
        // // regular workflow options apply here, with two additions (defaults shown):
        // cancellationType: ChildWorkflowCancellationType.WAIT_CANCELLATION_COMPLETED,
        // parentClosePolicy: ParentClosePolicy.PARENT_CLOSE_POLICY_TERMINATE,
      }),
    ),
  );
  return responseArray.join('\n');
}
```

To control any running Workflow from inside a Workflow, use [getExternalWorkflowHandle(workflowId)](https://typescript.temporal.io/api/namespaces/workflow/#getexternalworkflowhandle).

```ts
export async function terminateWorkflow() {
  const { workflowId } = workflowInfo(); // no await needed
  const handle = getExternalWorkflowHandle(workflowId); // sync function, not async
  await handle.cancel();
}
```

If the Child Workflow options aren't explicitly set, they inherit their values from the Parent Workflow options. Two advanced options are unique to Child Workflows:

- [cancellationType](https://typescript.temporal.io/api/enums/proto.coresdk.child_workflow.ChildWorkflowCancellationType): Controls when to throw the `CanceledFailure` exception when a Child Workflow is canceled.
- `parentClosePolicy`: Explained in the next section.

If you need to cancel a Child Workflow Execution, use [cancellation scopes](/develop/typescript/core-application#cancellation-scopes). A Child Workflow Execution is automatically cancelled when its containing scope is cancelled, as the sketch below shows.
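The following is a brief sketch of that pattern, not from the samples; it assumes the `childWorkflow` used in the examples above, and the `CancellationScope`, `startChild`, and `isCancellation` APIs are real `@temporalio/workflow` exports.

```ts
import { CancellationScope, isCancellation, startChild } from '@temporalio/workflow';

export async function parentWithCancellableChild(name: string): Promise<string> {
  try {
    return await CancellationScope.cancellable(async () => {
      const child = await startChild(childWorkflow, { args: [name] });
      // Cancelling this scope (for example, from a timeout or a Signal handler)
      // also requests cancellation of the Child Workflow Execution
      return await child.result();
    });
  } catch (err) {
    if (isCancellation(err)) {
      return 'child workflow was cancelled';
    }
    throw err;
  }
}
```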
### How to set a Parent Close Policy {#parent-close-policy}

A [Parent Close Policy](/parent-close-policy) determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out). The default Parent Close Policy option is set to terminate the Child Workflow Execution.

To specify how a Child Workflow reacts to a Parent Workflow reaching a Closed state, use the [`parentClosePolicy`](https://typescript.temporal.io/api/interfaces/workflow.ChildWorkflowOptions#parentclosepolicy) option.

[child-workflows/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/child-workflows/src/workflows.ts)

```ts
export async function parentWorkflow(...names: string[]): Promise<string> {
  const responseArray = await Promise.all(
    names.map((name) =>
      executeChild(childWorkflow, {
        args: [name],
        // workflowId, // add business-meaningful workflow id here
        // // regular workflow options apply here, with two additions (defaults shown):
        // cancellationType: ChildWorkflowCancellationType.WAIT_CANCELLATION_COMPLETED,
        // parentClosePolicy: ParentClosePolicy.PARENT_CLOSE_POLICY_TERMINATE,
      }),
    ),
  );
  return responseArray.join('\n');
}
```

---

## Continue-As-New - TypeScript SDK

This page answers the following questions for TypeScript developers:

- [What is Continue-As-New?](#what)
- [How to Continue-As-New?](#how)
- [When is it right to Continue-as-New?](#when)
- [How to test Continue-as-New?](#how-to-test)

## What is Continue-As-New? {#what}

[Continue-As-New](/workflow-execution/continue-as-new) lets a Workflow Execution close successfully and creates a new Workflow Execution. You can think of it as a checkpoint when your Workflow gets too long or approaches certain scaling limits.

The new Workflow Execution is in the same [chain](/workflow-execution#workflow-execution-chain); it keeps the same Workflow Id but gets a new Run Id and a fresh Event History. It also receives your Workflow's usual parameters.

## How to Continue-As-New using the TypeScript SDK {#how}

First, design your Workflow parameters so that you can pass in the "current state" when you Continue-As-New into the next Workflow run. This state is typically left `undefined` by the original caller of the Workflow.

View the source code in the context of the rest of the application code.

```typescript
export interface ClusterManagerInput {
  state?: ClusterManagerState;
  testContinueAsNew?: boolean;
}

export async function clusterManagerWorkflow(input: ClusterManagerInput = {}): Promise<ClusterManagerResult> {
```

The test hook in the above snippet is covered [below](#how-to-test).

Inside your Workflow, call the [`continueAsNew()`](https://typescript.temporal.io/api/namespaces/workflow#continueasnew) function with the same type. This stops the Workflow right away and starts a new one.

View the source code in the context of the rest of the application code.

```typescript
return await wf.continueAsNew<typeof clusterManagerWorkflow>({
  state: manager.getState(),
  testContinueAsNew: input.testContinueAsNew,
});
```

### Considerations for Workflows with Message Handlers {#with-message-handlers}

If you use Updates or Signals, don't call Continue-as-New from the handlers. Instead, wait for your handlers to finish in your main Workflow before you call `continueAsNew`. See the [`allHandlersFinished`](message-passing#wait-for-message-handlers) example for guidance.

## When is it right to Continue-as-New using the TypeScript SDK? {#when}

Use Continue-as-New when your Workflow might hit [Event History Limits](/workflow-execution/event#event-history).

Temporal tracks your Workflow's progress against these limits to let you know when you should Continue-as-New. Call `wf.workflowInfo().continueAsNewSuggested` to check if it's time, as in the sketch below.
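Here is a minimal sketch that combines both recommendations. It is not from the samples: the Workflow name and the per-iteration work are illustrative, while `continueAsNewSuggested`, `allHandlersFinished`, `condition`, and `continueAsNew` are real `@temporalio/workflow` APIs.

```typescript
import * as wf from '@temporalio/workflow';

export async function loopingWorkflow(iteration = 0): Promise<void> {
  for (;;) {
    // Stand-in for one unit of work per iteration
    await wf.sleep('1 minute');
    iteration++;

    if (wf.workflowInfo().continueAsNewSuggested) {
      // Let any in-flight Signal/Update handlers finish before continuing as new
      await wf.condition(() => wf.allHandlersFinished());
      await wf.continueAsNew<typeof loopingWorkflow>(iteration);
    }
  }
}
```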
## How to test Continue-as-New using the TypeScript SDK {#how-to-test}

Testing Workflows that naturally Continue-as-New may be time-consuming and resource-intensive. Instead, add a test hook to check your Workflow's Continue-as-New behavior faster in automated tests.

For example, when `testContinueAsNew == true`, this sample creates a test-only variable called `this.maxHistoryLength` and sets it to a small value. A helper method in the Workflow checks it each time it considers using Continue-as-New:

View the source code in the context of the rest of the application code.

```typescript
shouldContinueAsNew(): boolean {
  if (wf.workflowInfo().continueAsNewSuggested) {
    return true;
  }
  // This is just for ease-of-testing. In production, we trust Temporal to tell us when to continue-as-new.
  if (this.maxHistoryLength !== undefined && wf.workflowInfo().historyLength > this.maxHistoryLength) {
    return true;
  }
  return false;
}
```

---

## Converters and encryption - TypeScript SDK

## Payload Converter and Payload Codec Summary

This section summarizes the difference between a Payload Converter and Payload Codec.

### Payload Converter

Payload Converters are responsible for serializing application objects into a Payload and deserializing them back into application objects. A Payload, in this context, is a binary form suitable for network transmission that may include some metadata. This serialization process transforms an object (like those in JSON or Protobuf formats) into a binary format and vice versa. For example, an object might be serialized to JSON with UTF-8 byte encoding or to a protobuf binary using a specific set of protobuf message definitions.

Due to their operation within the Workflow context, Payload Converters run inside the Workflow sandbox. Consequently, Payload Converters cannot access external services or employ non-deterministic modules, which excludes most types of encryption due to their non-deterministic nature.

### Payload Codec

Payload Codecs transform one Payload into another, converting binary data to a different binary format. Unlike Payload Converters, Payload Codecs do not operate within the Workflow sandbox. This allows them to execute operations that can include calls to remote services and the use of non-deterministic modules, which are critical for tasks such as encrypting Payloads, compressing data, or offloading large payloads to an object store. Payload Codecs can also be implemented as a Codec Server (which will be described later on).

### Operational Chain

In practice, these two components operate in a chain to handle data securely. Incoming data first passes through a Payload Converter through the `toPayload` method, turning application objects into Payloads. These Payloads are then processed by the Payload Codec through the `encode` method, which adjusts the Payload according to the required security or efficiency needs before it is sent to the Temporal Cluster.

The process is symmetric for outgoing data. Payloads retrieved from the Temporal Cluster first pass through the Payload Codec through the `decode` method, which reverses any transformations applied during encoding. Finally, the resulting Payload is converted back into an application object by the Payload Converter through the `fromPayload` method, making it ready for use within the application.

## Payload Codec

> API documentation: [PayloadCodec](https://typescript.temporal.io/api/interfaces/common.PayloadCodec)

The default `PayloadCodec` does nothing. To create a custom one, you can implement the following interface:

```ts
interface PayloadCodec {
  /**
   * Encode an array of {@link Payload}s for sending over the wire.
   * @param payloads May have length 0.
   */
  encode(payloads: Payload[]): Promise<Payload[]>;

  /**
   * Decode an array of {@link Payload}s received from the wire.
   */
  decode(payloads: Payload[]): Promise<Payload[]>;
}
```
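For illustration, here is a minimal sketch of a codec that compresses every Payload with gzip. It is not from the official samples: the `GzipCodec` name and `'binary/gzip'` encoding are assumptions, while the `@temporalio/common` helpers, `@temporalio/proto` types, and Node.js `zlib` functions are real APIs.

```ts
import { gunzipSync, gzipSync } from 'zlib';
import { METADATA_ENCODING_KEY, Payload, PayloadCodec } from '@temporalio/common';
import { decode, encode } from '@temporalio/common/lib/encoding';
import { temporal } from '@temporalio/proto';

const ENCODING = 'binary/gzip'; // illustrative encoding name

export class GzipCodec implements PayloadCodec {
  async encode(payloads: Payload[]): Promise<Payload[]> {
    return payloads.map((payload) => ({
      metadata: { [METADATA_ENCODING_KEY]: encode(ENCODING) },
      // Compress the entire serialized Payload so its original metadata survives the round trip
      data: gzipSync(temporal.api.common.v1.Payload.encode(payload).finish()),
    }));
  }

  async decode(payloads: Payload[]): Promise<Payload[]> {
    return payloads.map((payload) => {
      if (!payload.metadata || decode(payload.metadata[METADATA_ENCODING_KEY]) !== ENCODING || !payload.data) {
        return payload; // not compressed by this codec; pass through unchanged
      }
      return temporal.api.common.v1.Payload.decode(gunzipSync(Buffer.from(payload.data)));
    });
  }
}
```

Like the encryption codec shown later on this page, this sketch wraps the whole serialized Payload rather than just its `data` field, so decoding restores the Payload exactly as the Payload Converter produced it.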
## Use custom payload conversion

Temporal SDKs provide a [Payload Converter](/payload-converter) that can be customized to convert a custom data type to a [Payload](/dataconversion#payload) and back.

The order in which your encoding Payload Converters are applied depends on the order given to the Data Converter. You can set multiple encoding Payload Converters to run your conversions. When the Data Converter receives a value for conversion, the value is passed through each Payload Converter in sequence until a converter that handles the data type performs the conversion.

## Composite Data Converters

Use a [Composite Data Converter](https://typescript.temporal.io/api/classes/common.CompositePayloadConverter) to apply custom, type-specific Payload Converters in a specified order. Defining a new Composite Data Converter is not always necessary to implement custom data handling. You can override the default Converter with a custom Codec, but a Composite Data Converter may be necessary for complex Workflow logic.

A Composite Data Converter can include custom rules that you create, and it can also leverage the default Data Converters built into Temporal. In fact, the default Data Converter logic is implemented internally in the Temporal source as a Composite Data Converter. It defines these rules in this order:

```typescript
export class DefaultPayloadConverter extends CompositePayloadConverter {
  constructor() {
    super(
      new UndefinedPayloadConverter(),
      new BinaryPayloadConverter(),
      new JsonPayloadConverter(),
    );
  }
}
```

The order of applying the Payload Converters is important. During serialization, the Data Converter tries the Payload Converters in that specific order until a Payload Converter returns a non-null Payload.

To replace the default Data Converter with a custom `CompositePayloadConverter`, use the following:

```typescript
export const payloadConverter = new CompositePayloadConverter(
  new UndefinedPayloadConverter(),
  new EjsonPayloadConverter(),
);
```

You can do this in its own `payload-converter.ts` file, for example. In the code snippet above, a converter is created that first attempts to handle `null` and `undefined` values. If the value isn't `null` or `undefined`, the EJSON serialization logic written in the `EjsonPayloadConverter` is then used.

The Payload Converter is then provided to the Worker and Client. Here is the Worker code:

```typescript
const worker = await Worker.create({
  workflowsPath: require.resolve('./workflows'),
  taskQueue: 'ejson',
  dataConverter: {
    payloadConverterPath: require.resolve('./payload-converter'),
  },
});
```

With this code, you now ensure that the Worker serializes and deserializes Workflow and Activity inputs and outputs using your EJSON-based logic, along with handling undefined values appropriately.

Here is the Client:

```typescript
const client = new Client({
  dataConverter: {
    payloadConverterPath: require.resolve('./payload-converter'),
  },
});
```

You can now use a variety of data types in arguments.

## How to use a custom payload converter in TypeScript {#custom-payload-conversion}

To support custom Payload conversion, create a [custom Payload Converter](/payload-converter#composite-data-converters) and configure the Data Converter to use it in your Client options. You can use custom Payload Converters to change how application objects get serialized to a binary Payload.
To handle custom data types that are not natively JSON-serializable (e.g., `BigInt`, `Date`, or binary data), you can create a custom Payload Converter. A custom Payload Converter is responsible for converting your custom data types to a payload format that Temporal can manage.

To implement a custom Payload Converter in TypeScript, follow these steps:

1. **Implement the `PayloadConverter` interface**: Start by creating a class that implements Temporal's [`PayloadConverter`](https://typescript.temporal.io/api/interfaces/common.PayloadConverter) interface.

   ```typescript
   interface PayloadConverter {
     /**
      * Converts a value to a {@link Payload}.
      * @param value The value to convert. Example values include the Workflow args sent by the Client and the values returned by a Workflow or Activity.
      */
     toPayload<T>(value: T): Payload;

     /**
      * Converts a {@link Payload} back to a value.
      */
     fromPayload<T>(payload: Payload): T;
   }
   ```

   This custom converter should include logic for both serialization (`toPayload`) and deserialization (`fromPayload`), handling your specific data types or serialization format. The method `toPayload` returns a Payload object, which is used to manage and transport serialized data. The method `fromPayload` returns the deserialized data. This ensures that the data returned is in the same format as it was before serialization, allowing it to be used directly in the application.

2. **Configure the Data Converter**: To send values that are not JSON-serializable, like a `BigInt` or `Date`, provide the custom Data Converter to the Client and Worker as described in the [Composite Data Converters](#composite-data-converters) section.

#### Custom implementation

Some example implementations are in the SDK itself:

- [common/src/converter/payload-converter.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/common/src/converter/payload-converter.ts)
- [common/src/converter/protobuf-payload-converters.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/common/src/converter/protobuf-payload-converters.ts)

The sample project [samples-typescript/ejson](https://github.com/temporalio/samples-typescript/tree/main/ejson) creates an EJSON custom `PayloadConverter`. It implements `PayloadConverterWithEncoding` instead of `PayloadConverter` so that it can be used with [CompositePayloadConverter](https://typescript.temporal.io/api/classes/common.CompositePayloadConverter/):

[ejson/src/ejson-payload-converter.ts](https://github.com/temporalio/samples-typescript/blob/main/ejson/src/ejson-payload-converter.ts)

```ts
import {
  EncodingType,
  METADATA_ENCODING_KEY,
  Payload,
  PayloadConverterWithEncoding,
  PayloadConverterError,
} from '@temporalio/common';

/**
 * Converts between values and [EJSON](https://docs.meteor.com/api/ejson.html) Payloads.
 */
export class EjsonPayloadConverter implements PayloadConverterWithEncoding {
  // Use 'json/plain' so that Payloads are displayed in the UI
  public encodingType = 'json/plain' as EncodingType;

  public toPayload(value: unknown): Payload | undefined {
    if (value === undefined) return undefined;
    let ejson;
    try {
      ejson = EJSON.stringify(value);
    } catch (e) {
      throw new UnsupportedEjsonTypeError(
        `Can't run EJSON.stringify on this value: ${value}. Either convert it (or its properties) to EJSON-serializable values (see https://docs.meteor.com/api/ejson.html ), or create a custom data converter.
EJSON.stringify error message: ${errorMessage(e)}`,
        e as Error,
      );
    }
    return {
      metadata: {
        [METADATA_ENCODING_KEY]: encode('json/plain'),
        // Include an additional metadata field to indicate that this is an EJSON payload
        format: encode('extended'),
      },
      data: encode(ejson),
    };
  }

  public fromPayload<T>(content: Payload): T {
    return content.data ? EJSON.parse(decode(content.data)) : content.data;
  }
}

export class UnsupportedEjsonTypeError extends PayloadConverterError {
  public readonly name: string = 'UnsupportedJsonTypeError';

  constructor(
    message: string | undefined,
    public readonly cause?: Error,
  ) {
    super(message ?? undefined);
  }
}
```

Then we instantiate one and export it:

[ejson/src/payload-converter.ts](https://github.com/temporalio/samples-typescript/blob/main/ejson/src/payload-converter.ts)

```ts
export const payloadConverter = new CompositePayloadConverter(
  new UndefinedPayloadConverter(),
  new EjsonPayloadConverter(),
);
```

We provide it to the Worker and Client:

[ejson/src/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/ejson/src/worker.ts)

```ts
const worker = await Worker.create({
  workflowsPath: require.resolve('./workflows'),
  taskQueue: 'ejson',
  dataConverter: { payloadConverterPath: require.resolve('./payload-converter') },
});
```

[ejson/src/client.ts](https://github.com/temporalio/samples-typescript/blob/main/ejson/src/client.ts)

```ts
const client = new Client({
  connection,
  dataConverter: { payloadConverterPath: require.resolve('./payload-converter') },
});
```

Then we can use supported data types in arguments:

[ejson/src/client.ts](https://github.com/temporalio/samples-typescript/blob/main/ejson/src/client.ts)

```ts
const user: User = {
  id: uuid(),
  // age: 1000n, BigInt isn't supported
  hp: Infinity,
  matcher: /.*Stormblessed/,
  token: Uint8Array.from([1, 2, 3]),
  createdAt: new Date(),
};

const handle = await client.workflow.start(example, {
  args: [user],
  taskQueue: 'ejson',
  workflowId: `example-user-${user.id}`,
});
```

And they get parsed correctly for the Workflow:

[ejson/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/ejson/src/workflows.ts)

```ts
export async function example(user: User): Promise<Result> {
  const success =
    user.createdAt.getTime() < Date.now() &&
    user.hp > 50 &&
    user.matcher.test('Kaladin Stormblessed') &&
    user.token instanceof Uint8Array;
  return { success, at: new Date() };
}
```

#### Protobufs

To serialize values as [Protocol Buffers](https://protobuf.dev/) (protobufs):

- Use [protobufjs](https://protobufjs.github.io/protobuf.js/).
- Use runtime-loaded messages (not generated classes) and `MessageClass.create` (not `new MessageClass()`).
- Generate `json-module.js` with a command like the following:

  ```sh
  pbjs -t json-module -w commonjs -o protos/json-module.js protos/*.proto
  ```

- Patch `json-module.js`:

  [protobufs/protos/root.js](https://github.com/temporalio/samples-typescript/blob/main/protobufs/protos/root.js)

  ```js
  const { patchProtobufRoot } = require('@temporalio/common/lib/protobufs');
  const unpatchedRoot = require('./json-module');
  module.exports = patchProtobufRoot(unpatchedRoot);
  ```

- Generate `root.d.ts` with the following command:

  ```sh
  pbjs -t static-module protos/*.proto | pbts -o protos/root.d.ts -
  ```

- Create a [`DefaultPayloadConverterWithProtobufs`](https://typescript.temporal.io/api/classes/protobufs.DefaultPayloadConverterWithProtobufs/):

  [protobufs/src/payload-converter.ts](https://github.com/temporalio/samples-typescript/blob/main/protobufs/src/payload-converter.ts)

  ```ts
  export const payloadConverter = new DefaultPayloadConverterWithProtobufs({ protobufRoot: root });
  ```

  Alternatively, we can use Protobuf Payload Converters directly, or with other converters. If we know that we only use Protobuf objects, and we want them binary encoded (which saves space over proto3 JSON, but can't be viewed in the Web UI), we could do the following:

  ```ts
  export const payloadConverter = new ProtobufBinaryPayloadConverter(root);
  ```

  Similarly, if we wanted binary-encoded Protobufs in addition to the other default types, we could do the following:

  ```ts
  import {
    BinaryPayloadConverter,
    CompositePayloadConverter,
    JsonPayloadConverter,
    UndefinedPayloadConverter,
  } from '@temporalio/common';

  export const payloadConverter = new CompositePayloadConverter(
    new UndefinedPayloadConverter(),
    new BinaryPayloadConverter(),
    new ProtobufBinaryPayloadConverter(root),
    new JsonPayloadConverter(),
  );
  ```

- Provide it to the Worker:

  [protobufs/src/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/protobufs/src/worker.ts)

  ```ts
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities,
    taskQueue: 'protobufs',
    dataConverter: { payloadConverterPath: require.resolve('./payload-converter') },
  });
  ```

  [WorkerOptions.dataConverter](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions#dataconverter)

- Provide it to the Client:

  [protobufs/src/client.ts](https://github.com/temporalio/samples-typescript/blob/main/protobufs/src/client.ts)

  ```ts
  async function run() {
    const config = loadClientConnectConfig();
    const connection = await Connection.connect(config.connectionOptions);
    const client = new Client({
      connection,
      dataConverter: { payloadConverterPath: require.resolve('./payload-converter') },
    });

    const handle = await client.workflow.start(example, {
      args: [foo.bar.ProtoInput.create({ name: 'Proto', age: 2 })],
      // can't do:
      // args: [new foo.bar.ProtoInput({ name: 'Proto', age: 2 })],
      taskQueue: 'protobufs',
      workflowId: 'my-business-id-' + uuid(),
    });
    console.log(`Started workflow ${handle.workflowId}`);

    const result: ProtoResult = await handle.result();
    console.log(result.toJSON());
  }
  ```

- Use protobufs in your Workflows and Activities:

  [protobufs/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/protobufs/src/workflows.ts)

  ```ts
  const { protoActivity } = proxyActivities<typeof activities>({
    startToCloseTimeout: '1 minute',
  });

  export async function example(input: foo.bar.ProtoInput): Promise<ProtoResult> {
    const result = await protoActivity(input);
    return result;
  }
  ```
[protobufs/src/activities.ts](https://github.com/temporalio/samples-typescript/blob/main/protobufs/src/activities.ts)

```ts
export async function protoActivity(input: foo.bar.ProtoInput): Promise<ProtoResult> {
  return ProtoResult.create({ sentence: `${input.name} is ${input.age} years old.` });
}
```

#### Encryption

> Background: [Encryption](/payload-codec#encryption)

The following is an example class that implements the `PayloadCodec` interface:

[encryption/src/encryption-codec.ts](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/encryption-codec.ts)

```ts
const ENCODING = 'binary/encrypted';
const METADATA_ENCRYPTION_KEY_ID = 'encryption-key-id';

export class EncryptionCodec implements PayloadCodec {
  constructor(
    protected readonly keys: Map<string, CryptoKey>,
    protected readonly defaultKeyId: string,
  ) {}

  static async create(keyId: string): Promise<EncryptionCodec> {
    const keys = new Map<string, CryptoKey>();
    keys.set(keyId, await fetchKey(keyId));
    return new this(keys, keyId);
  }

  async encode(payloads: Payload[]): Promise<Payload[]> {
    return Promise.all(
      payloads.map(async (payload) => ({
        metadata: {
          [METADATA_ENCODING_KEY]: encode(ENCODING),
          [METADATA_ENCRYPTION_KEY_ID]: encode(this.defaultKeyId),
        },
        // Encrypt entire payload, preserving metadata
        data: await encrypt(
          temporal.api.common.v1.Payload.encode(payload).finish(),
          this.keys.get(this.defaultKeyId)!, // eslint-disable-line @typescript-eslint/no-non-null-assertion
        ),
      })),
    );
  }

  async decode(payloads: Payload[]): Promise<Payload[]> {
    return Promise.all(
      payloads.map(async (payload) => {
        if (!payload.metadata || decode(payload.metadata[METADATA_ENCODING_KEY]) !== ENCODING) {
          return payload;
        }
        if (!payload.data) {
          throw new ValueError('Payload data is missing');
        }
        const keyIdBytes = payload.metadata[METADATA_ENCRYPTION_KEY_ID];
        if (!keyIdBytes) {
          throw new ValueError('Unable to decrypt Payload without encryption key id');
        }

        const keyId = decode(keyIdBytes);
        let key = this.keys.get(keyId);
        if (!key) {
          key = await fetchKey(keyId);
          this.keys.set(keyId, key);
        }
        const decryptedPayloadBytes = await decrypt(payload.data, key);
        console.log('Decrypting payload.data:', payload.data);
        return temporal.api.common.v1.Payload.decode(decryptedPayloadBytes);
      }),
    );
  }
}

async function fetchKey(_keyId: string): Promise<CryptoKey> {
  // In production, fetch key from a key management system (KMS). You may want to memoize requests if you'll be decoding
  // Payloads that were encrypted using keys other than defaultKeyId.
  const key = Buffer.from('test-key-test-key-test-key-test!');
  const cryptoKey = await crypto.subtle.importKey(
    'raw',
    key,
    {
      name: 'AES-GCM',
    },
    true,
    ['encrypt', 'decrypt'],
  );
  return cryptoKey;
}
```

The encryption and decryption code is in [src/crypto.ts](https://github.com/temporalio/samples-typescript/tree/main/encryption/src/crypto.ts). Because encryption is CPU intensive, and doing AES with the crypto module built into Node.js blocks the main thread, we use `@ronomon/crypto-async`, which uses the Node.js thread pool.
As before, we provide a custom Data Converter to the Client and Worker:

[encryption/src/client.ts](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/client.ts)

```ts
const client = new Client({
  connection,
  dataConverter: await getDataConverter(),
});

const handle = await client.workflow.start(example, {
  args: ['Alice: Private message for Bob.'],
  taskQueue: 'encryption',
  workflowId: `my-business-id-${uuid()}`,
});
console.log(`Started workflow ${handle.workflowId}`);
console.log(await handle.result());
```

[encryption/src/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/worker.ts)

```ts
const worker = await Worker.create({
  workflowsPath: require.resolve('./workflows'),
  taskQueue: 'encryption',
  dataConverter: await getDataConverter(),
});
```

When the Client sends `'Alice: Private message for Bob.'` to the Workflow, it gets encrypted on the Client and decrypted in the Worker. The Workflow receives the decrypted message and appends another message. When it returns that longer string, the string gets encrypted by the Worker and decrypted by the Client.

[encryption/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/workflows.ts)

```ts
export async function example(message: string): Promise<string> {
  return `${message}\nBob: Hi Alice, I'm Workflow Bob.`;
}
```

---

## Core application - TypeScript SDK

The Foundations section of the Temporal Developer's guide covers the minimum set of concepts and implementation details needed to build and run a [Temporal Application](/temporal#temporal-application), that is, all the relevant steps to start a [Workflow Execution](#develop-workflows) that executes an [Activity](#develop-activities).

In this section you can find the following:

- [Run a development Temporal Service](#run-a-development-server)
- [Connect to a development Temporal Service](#connect-to-a-dev-cluster)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Develop a Workflow](#develop-workflows)
- [Develop an Activity](#develop-activities)
- [Start an Activity Execution](#activity-execution)
- [Run a dev Worker](#run-a-dev-worker)
- [Run a Worker on Docker](#run-a-worker-on-docker)
- [Run a Temporal Cloud Worker](#run-a-temporal-cloud-worker)
- [Start a Workflow Execution](#start-workflow-execution)

## How to install the Temporal CLI and run a development server {#run-a-development-server}

This section describes how to install the [Temporal CLI](/cli) and run a development Temporal Service. The local development Temporal Service comes packaged with the [Temporal Web UI](/web-ui).

For information on deploying and running a self-hosted production Temporal Service, see the [Self-hosted guide](/self-hosted-guide), or sign up for [Temporal Cloud](/cloud) and let us run your production Temporal Service for you.

Temporal CLI is a tool for interacting with a Temporal Service from the command line, and it includes a distribution of the Temporal Server and Web UI. This local development Temporal Service runs as a single process with zero runtime dependencies, and it supports persistence to disk and in-memory mode through SQLite.

**Install the Temporal CLI**

The Temporal CLI is available on macOS, Windows, and Linux.

### macOS

**How to install the Temporal CLI on macOS**

Choose one of the following install methods to install the Temporal CLI on macOS:

**Install the Temporal CLI with Homebrew**

```bash
brew install temporal
```

**Install the Temporal CLI from CDN**

1. Select the platform and architecture needed.
   - Download for Darwin amd64: https://temporal.download/cli/archive/latest?platform=darwin&arch=amd64
   - Download for Darwin arm64: https://temporal.download/cli/archive/latest?platform=darwin&arch=arm64
2. Extract the downloaded archive.
3. Add the `temporal` binary to your PATH.

### Linux

**How to install the Temporal CLI on Linux**

Choose one of the following install methods to install the Temporal CLI on Linux:

**Install the Temporal CLI with Homebrew**

```bash
brew install temporal
```

**Install the Temporal CLI from CDN**

1. Select the platform and architecture needed.
   - Download for Linux amd64: https://temporal.download/cli/archive/latest?platform=linux&arch=amd64
   - Download for Linux arm64: https://temporal.download/cli/archive/latest?platform=linux&arch=arm64
2. Extract the downloaded archive.
3. Add the `temporal` binary to your PATH.

### Windows

**How to install the Temporal CLI on Windows**

Follow these instructions to install the Temporal CLI on Windows:

**Install the Temporal CLI from CDN**

1. Select the platform and architecture needed and download the binary.
   - Download for Windows amd64: https://temporal.download/cli/archive/latest?platform=windows&arch=amd64
   - Download for Windows arm64: https://temporal.download/cli/archive/latest?platform=windows&arch=arm64
2. Extract the downloaded archive.
3. Add the `temporal.exe` binary to your PATH.

### Start the Temporal Development Server

Start the Temporal Development Server by using the `server start-dev` command.

```bash
temporal server start-dev
```

This command automatically starts the Web UI, creates the default [Namespace](/namespaces), and uses an in-memory database.

The Temporal Server should be available on `localhost:7233` and the Temporal Web UI should be accessible at [`http://localhost:8233`](http://localhost:8233/).

The server's startup configuration can be customized using command line options. For a full list of options, run:

```bash
temporal server start-dev --help
```

## How to install a Temporal SDK {#install-a-temporal-sdk}

A [Temporal SDK](/encyclopedia/temporal-sdks) provides a framework for [Temporal Application](/temporal#temporal-application) development.

An SDK provides you with the following:

- A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) to communicate with a [Temporal Service](/temporal-service).
- APIs to develop [Workflows](/workflows).
- APIs to create and manage [Worker Processes](/workers#worker).
- APIs to author [Activities](/activity-definition).

[![NPM](https://img.shields.io/npm/v/temporalio.svg?style=for-the-badge)](https://www.npmjs.com/search?q=author%3Atemporal-sdk-team)

This project requires Node.js 18 or later.

**Create a project**

```bash
npx @temporalio/create@latest ./your-app
```

**Add to an existing project**

```bash
npm install @temporalio/client @temporalio/worker @temporalio/workflow @temporalio/activity @temporalio/common
```

:::note

The TypeScript SDK is designed with TypeScript-first developer experience in mind, but it works equally well with JavaScript.

:::

### How to find the TypeScript SDK API reference {#api-reference}

The Temporal TypeScript SDK API reference is published to [typescript.temporal.io](https://typescript.temporal.io).

### Where are SDK-specific code examples? {#code-samples}

You can find a complete list of executable code samples in [Temporal's GitHub repository](https://github.com/temporalio?q=samples-&type=all&language=&sort=).
Additionally, several of the [Tutorials](https://learn.temporal.io) are backed by a fully executable template application.

The [TypeScript samples library](https://github.com/temporalio/samples-typescript) on GitHub demonstrates various capabilities of Temporal.

**Where can I find video demos?**

See the [Temporal TypeScript YouTube playlist](https://www.youtube.com/playlist?list=PLl9kRkvFJrlTavecydpk9r6cF7qBmQJvb).

### How to import an ECMAScript module {#ecmascript-modules}

The JavaScript ecosystem is quickly moving toward publishing ECMAScript modules (ESM) instead of CommonJS modules. For example, `node-fetch@3` is ESM, but `node-fetch@2` is CommonJS.

For more information about importing a pure ESM dependency, see our [Fetch ESM](https://github.com/temporalio/samples-typescript/tree/main/fetch-esm) sample for the necessary configuration changes:

- `package.json` must include the `"type": "module"` attribute.
- `tsconfig.json` should output in `esnext` format.
- Imports must include the `.js` file extension.

## Linting and types in TypeScript {#linting-and-types}

If you started your project with `@temporalio/create`, you already have our recommended TypeScript and ESLint configurations.

If you incrementally added Temporal to an existing app, we do recommend setting up linting and types because they help catch bugs well before you ship them to production, and they improve your development feedback loop. Take a look at our recommended [.eslintrc](https://github.com/temporalio/samples-typescript/blob/main/.shared/.eslintrc.js) file and tweak it to suit your needs.

## How to connect a Temporal Client to a Temporal Service {#connect-to-a-dev-cluster}

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the [Temporal Service](/temporal-service). Communication with a Temporal Service includes, but isn't limited to, the following:

- Starting Workflow Executions.
- Sending Signals to Workflow Executions.
- Sending Queries to Workflow Executions.
- Getting the results of a Workflow Execution.
- Providing an Activity Task Token.

:::caution

A Temporal Client cannot be initialized and used inside a Workflow. However, it is acceptable and common to use a Temporal Client inside an Activity to communicate with a Temporal Service.

:::

When you are running a Temporal Service locally (such as the [Temporal CLI](https://docs.temporal.io/cli/server#start-dev)), the number of connection options you must provide is minimal. Many SDKs default to the local host or IP address and port that Temporalite and [Docker Compose](https://github.com/temporalio/docker-compose) serve (`127.0.0.1:7233`).

Creating a [Connection](https://typescript.temporal.io/api/classes/client.Connection) connects to the Temporal Service, and you can pass the `Connection` instance when creating the [Client](https://typescript.temporal.io/api/classes/client.Client#connection). If you omit the `Connection` and just create a `new Client()`, it will connect to `localhost:7233`.

```ts
async function run() {
  const client = new Client();
  // . . .
  await client.connection.close();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

## How to connect to Temporal Cloud {#connect-to-temporal-cloud}

When you connect to [Temporal Cloud](/cloud), you need to provide additional connection and client options that include the following:

- The [Temporal Cloud Namespace Id](/cloud/namespaces#temporal-cloud-namespace-id).
- The [Namespace's gRPC endpoint](/cloud/namespaces#temporal-cloud-grpc-endpoint). An endpoint listing is available at the [Temporal Cloud Website](https://cloud.temporal.io/namespaces) on each Namespace detail page. The endpoint contains the Namespace Id and port.
- mTLS CA certificate.
- mTLS private key.

For more information about managing and generating client certificates for Temporal Cloud, see [How to manage certificates in Temporal Cloud](/cloud/certificates).

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see [Temporal Customization Samples](https://github.com/temporalio/samples-server).

Create a [`Connection`](https://typescript.temporal.io/api/classes/client.Connection) with a [`connectionOptions`](https://typescript.temporal.io/api/interfaces/client.ConnectionOptions) object that has your Cloud namespace and client certificate.

```ts
const { NODE_ENV = 'development' } = process.env;
const isDeployed = ['production', 'staging'].includes(NODE_ENV);

async function run() {
  const cert = await fs.readFile('./path-to/your.pem');
  const key = await fs.readFile('./path-to/your.key');

  let connectionOptions = {};
  if (isDeployed) {
    connectionOptions = {
      address: 'your-namespace.tmprl.cloud:7233',
      tls: {
        clientCertPair: {
          crt: cert,
          key,
        },
      },
    };
  }

  const connection = await Connection.connect(connectionOptions);

  const client = new Client({
    connection,
    namespace: 'your-namespace',
  });

  // . . .

  await client.connection.close();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

## How to develop a basic Workflow {#develop-workflows}

Workflows are the fundamental unit of a Temporal Application, and it all starts with the development of a [Workflow Definition](/workflow-definition).

In the Temporal TypeScript SDK programming model, Workflow Definitions are _just functions_, which can store state and orchestrate Activity Functions. The following code snippet uses `proxyActivities` to schedule a `greet` Activity in the system to say hello.

A Workflow Definition can have multiple parameters; however, we recommend using a single object parameter.

```typescript
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

const { greet } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

type ExampleArgs = {
  name: string;
};

export async function example(args: ExampleArgs): Promise<{ greeting: string }> {
  const greeting = await greet(args.name);
  return { greeting };
}
```

### How to define Workflow parameters {#workflow-parameters}

Temporal Workflows may have any number of custom parameters. However, we strongly recommend that objects are used as parameters, so that the object's individual fields may be altered without breaking the signature of the Workflow. All Workflow Definition parameters must be serializable.

You can define and pass parameters in your Workflow. In this example, you define your arguments in your `client.ts` file and pass those parameters to `workflows.ts` through your Workflow function.

Start a Workflow with the parameters that are in the `client.ts` file. In this example we set the `name` parameter to `Temporal` and `born` to `2019`. Then set the Task Queue and Workflow Id.

`client.ts`

```typescript
...
await client.workflow.start(example, {
  args: [{ name: 'Temporal', born: 2019 }],
  taskQueue: 'your-queue',
  workflowId: 'business-meaningful-id',
});
```

In `workflows.ts`, define the type of the parameter that the Workflow function takes in. The interface `ExampleParam` is a name we can now use to describe the requirement in the previous example.
It represents an object with two properties: `name`, which is a `string`, and `born`, which is a `number`. Then define a function that takes a parameter of type `ExampleParam` and returns a `Promise<string>`. The `Promise` represents the eventual completion or failure of the Workflow function and carries its resulting value.

```ts
interface ExampleParam {
  name: string;
  born: number;
}

export async function example({ name, born }: ExampleParam): Promise<string> {
  return `Hello ${name}, you were born in ${born}.`;
}
```

### How to define Workflow return parameters {#workflow-return-values}

Workflow return values must also be serializable. Returning results, returning errors, or throwing exceptions is fairly idiomatic in each language that is supported. However, Temporal APIs that must be used to get the result of a Workflow Execution will only ever receive one of either the result or the error.

To return a value from a Workflow function, use a `Promise<T>`. The following example uses a `Promise<string>` to eventually return a greeting built from the `name` and `born` parameters.

```typescript
interface ExampleParam {
  name: string;
  born: number;
}

export async function example({ name, born }: ExampleParam): Promise<string> {
  return `Hello ${name}, you were born in ${born}.`;
}
```

### How to customize your Workflow Type {#workflow-type}

Workflows have a Type, referred to as the Workflow name. The following examples demonstrate how to set a custom name for your Workflow Type.

In TypeScript, the Workflow Type is the Workflow function name and there isn't a mechanism to customize the Workflow Type. In the following example, the Workflow Type is the name of the function, `helloWorld`.

[snippets/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/snippets/src/workflows.ts)

```ts
export async function helloWorld(): Promise<string> {
  return '👋 Hello World!';
}
```

### How to develop Workflow logic {#workflow-logic-requirements}

Workflow logic is constrained by [deterministic execution requirements](/workflow-definition#deterministic-constraints). Therefore, each language is limited to the use of certain idiomatic techniques. However, each Temporal SDK provides a set of APIs that can be used inside your Workflow to interact with external (to the Workflow) application code.

In the Temporal TypeScript SDK, Workflows run in a deterministic sandboxed environment. The code is bundled on Worker creation using Webpack, and can import any package as long as it does not reference Node.js or DOM APIs.

:::note

If you **must** use a library that references a Node.js or DOM API and you are certain that those APIs are not used at runtime, add that module to the [ignoreModules](https://typescript.temporal.io/api/interfaces/worker.BundleOptions#ignoremodules) list (see the sketch at the end of this section).

:::

The Workflow sandbox can run only deterministic code, so side effects and access to external state must be done through Activities, because Activity outputs are recorded in the Event History and can be read deterministically by the Workflow. This limitation also means that Workflow code cannot directly import the [Activity Definition](/activity-definition). [Activity Types](/activity-definition#activity-type) can be imported, so they can be invoked in a type-safe manner.

To make the Workflow runtime deterministic, functions like `Math.random()`, `Date`, and `setTimeout()` are replaced by deterministic versions.
[FinalizationRegistry](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/FinalizationRegistry) and [WeakRef](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakRef) are removed because V8's garbage collector is not deterministic.
The following example shows the implications of the deterministic `Date` API:

```typescript
// this prints the *exact* same timestamp repeatedly
for (let x = 0; x < 10; ++x) {
  console.log(Date.now());
}

// this prints timestamps increasing roughly 1s each iteration
for (let x = 0; x < 10; ++x) {
  await sleep('1 second');
  console.log(Date.now());
}
```
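As mentioned in the note above, you can allow-list a module that references Node.js or DOM APIs but never calls them at Workflow runtime. The following is a minimal sketch, not from the official samples: the Task Queue name and the ignored module are illustrative assumptions, while `bundlerOptions.ignoreModules` is the real Worker option.

```ts
import { Worker } from '@temporalio/worker';

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    taskQueue: 'example', // illustrative
    bundlerOptions: {
      // Assumption: a dependency imports 'fs' but never calls it from Workflow code
      ignoreModules: ['fs'],
    },
  });
  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```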
## How to develop a basic Activity {#develop-activities}

One of the primary things that Workflows do is orchestrate the execution of Activities. An Activity is a normal function or method execution that's intended to execute a single, well-defined action (either short or long-running), such as querying a database, calling a third-party API, or transcoding a media file. An Activity can interact with the world outside the Temporal Platform or use a Temporal Client to interact with a Temporal Service. For the Workflow to be able to execute the Activity, we must define the [Activity Definition](/activity-definition).

- Activities execute in the standard Node.js environment.
- Activities cannot be in the same file as Workflows and must be separately registered.
- Activities may be retried repeatedly, so you may need to use idempotency keys for critical side effects.

Activities are _just functions_. The following is an Activity that accepts a string parameter and returns a string.

[snippets/src/activities.ts](https://github.com/temporalio/samples-typescript/blob/main/snippets/src/activities.ts)

```ts
export async function greet(name: string): Promise<string> {
  return `👋 Hello, ${name}!`;
}
```

### How to develop Activity Parameters {#activity-parameters}

There is no explicit limit to the total number of parameters that an [Activity Definition](/activity-definition) may support. However, there is a limit to the total size of the data that ends up encoded into a gRPC message Payload.

A single argument is limited to a maximum size of 2 MB. And the total size of a gRPC message, which includes all the arguments, is limited to a maximum of 4 MB.

Also, keep in mind that all Payload data is recorded in the [Workflow Execution Event History](/workflow-execution/event#event-history) and large Event Histories can affect Worker performance. This is because the entire Event History could be transferred to a Worker Process with a [Workflow Task](/tasks#workflow-task).

Some SDKs require that you pass context objects, others do not. When it comes to your application data, that is, data that is serialized and encoded into a Payload, we recommend that you use a single object as an argument that wraps the application data passed to Activities. This is so that you can change what data is passed to the Activity without breaking a function or method signature.

This Activity takes a single `name` parameter of type `string`.

[snippets/src/activities.ts](https://github.com/temporalio/samples-typescript/blob/main/snippets/src/activities.ts)

```ts
export async function greet(name: string): Promise<string> {
  return `👋 Hello, ${name}!`;
}
```

### How to define Activity return values {#activity-return-values}

All data returned from an Activity must be serializable. Activity return values are subject to payload size limits in Temporal. The default payload size limit is 2 MB, and there is a hard limit of 4 MB for any gRPC message size in the Event History transaction ([see Cloud limits here](https://docs.temporal.io/cloud/limits#per-message-grpc-limit)). Keep in mind that all return values are recorded in a [Workflow Execution Event History](/workflow-execution/event#event-history).

In TypeScript, the return value is always a Promise. In the following example, `Promise<string>` is the return type.
```typescript
export async function greet(name: string): Promise<string> {
  return `👋 Hello, ${name}!`;
}
```

### How to customize your Activity Type {#activity-type}

Activities have a Type, referred to as the Activity name. The following examples demonstrate how to set a custom name for your Activity Type.

You can customize the name of the Activity when you register it with the Worker. In the following example, the Activity name is `activityFoo`.

[snippets/src/worker-activity-type-custom.ts](https://github.com/temporalio/samples-typescript/blob/main/snippets/src/worker-activity-type-custom.ts)

```ts
async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    taskQueue: 'snippets',
    activities: {
      activityFoo: greet,
    },
  });
  await worker.run();
}
```

### Important design patterns for Activities {#activity-design-patterns}

The following are some important (and frequently requested) patterns for using our Activities APIs. These patterns address common needs and use cases.

#### Share dependencies in Activity functions (dependency injection)

Because Activities are "just functions," you can also create functions that create Activities. This is a helpful pattern for using closures to do the following:

- Store expensive dependencies for sharing, such as database connections.
- Inject secret keys (such as environment variables) from the Worker to the Activity.

[activities-dependency-injection/src/activities.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-dependency-injection/src/activities.ts)

```ts
export interface DB {
  get(key: string): Promise<string>;
}

export const createActivities = (db: DB) => ({
  async greet(msg: string): Promise<string> {
    const name = await db.get('name'); // simulate read from db
    return `${msg}: ${name}`;
  },
  async greet_es(mensaje: string): Promise<string> {
    const name = await db.get('name'); // simulate read from db
    return `${mensaje}: ${name}`;
  },
});
```
See the [full example](https://github.com/temporalio/samples-typescript/tree/main/activities-dependency-injection).

When you register these in the Worker, pass your shared dependencies accordingly:

```ts
async function run() {
  // Mock DB connection initialization in Worker
  const db = {
    async get(_key: string) {
      return 'Temporal';
    },
  };

  const worker = await Worker.create({
    taskQueue: 'dependency-injection',
    workflowsPath: require.resolve('./workflows'),
    activities: createActivities(db),
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Because Activities are always referenced by name, inside the Workflow they can be proxied as normal, although the types need some adjustment:

[activities-dependency-injection/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-dependency-injection/src/workflows.ts)

```ts
// Note usage of the ReturnType<> generic, since createActivities is a factory function
const { greet, greet_es } = proxyActivities<ReturnType<typeof createActivities>>({
  startToCloseTimeout: '30 seconds',
});
```
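A Workflow can then call the proxied Activities like ordinary async functions. The following is a hypothetical usage sketch, not part of the sample; it assumes the `greet` and `greet_es` proxies from the snippet above:

```ts
export async function greetingWorkflow(): Promise<string> {
  // Each call schedules an Activity Task; the Worker supplies the shared DB dependency
  const english = await greet('Hello');
  const spanish = await greet_es('Hola');
  return `${english}\n${spanish}`;
}
```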
#### Import multiple Activities simultaneously

You can proxy multiple Activities from the same `proxyActivities` call if you want them to share the same timeouts, retries, and options:

```ts
export async function Workflow(name: string): Promise<void> {
  // destructuring multiple activities with the same options
  const { act1, act2, act3 } = proxyActivities<typeof activities>(/* activityOptions */);
  await act1();
  await Promise.all([act2(), act3()]);
}
```

#### Dynamically reference Activities

Because Activities are referenced only by their string names, you can reference them dynamically if needed:

```js
export async function DynamicWorkflow(activityName, ...args) {
  const acts = proxyActivities(/* activityOptions */);

  // these are equivalent
  await acts.activity1();
  await acts['activity1']();

  // dynamic reference to activities using activityName
  let result = await acts[activityName](...args);
}
```

Type safety is still supported here, but we encourage you to validate and handle mismatches in Activity names. An invalid Activity name leads to a `NotFoundError` with a message that looks like this:

```
ApplicationFailure: Activity function actC is not registered on this Worker, available activities: ["actA", "actB"]
```

## How to start an Activity Execution {#activity-execution}

Calls to spawn [Activity Executions](/activity-execution) are written within a [Workflow Definition](/workflow-definition).

In TypeScript, you never call an Activity function directly. Instead, you pass in the _types_ of your Activities and Activity options to the `proxyActivities` function. This gives you an _Activity Handle_, a type-safe proxy object with the same function names and signatures as your real Activities. From the Activity Handle, you can call your Activities as if they were normal async functions.

```typescript
// Only import the activity types, not the functions themselves
import type * as activities from './activities';

// Retrieve the Activity Handle by passing in the Activity types and options
const activityHandle = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

// Deconstruct the individual Activity functions from the Activity Handle
const { greet } = activityHandle;

// A Workflow that calls an Activity
export async function example(name: string): Promise<string> {
  return await greet(name);
}
```

When you call a proxied function, the Workflow does not execute the Activity code directly. Instead, it schedules an Activity Task. After the Activity Task is scheduled, it becomes available for a Worker to pick up and execute. This results in the set of three [Activity Task](/tasks#activity-task) related Events: [ActivityTaskScheduled](/references/events#activitytaskscheduled), [ActivityTaskStarted](/references/events#activitytaskstarted), and [ActivityTaskCompleted](/references/events#activitytaskcompleted) in your Workflow Execution Event History.

The Worker may run many Activity Executions at the same time, all using the same Activity function code. Temporal can also retry an Activity if it fails or times out. For this reason, you should write Activities to be [idempotent](../../encyclopedia/activities/activity-definition.mdx#idempotency): calling them multiple times with the same input should have the same effect as calling them once.

:::tip

Every Activity call you make is recorded in the Workflow's execution history, including the parameters you pass in and the value that comes back. This history is what allows Temporal to recover a Workflow after a failure.
Because the entire history must be stored and replayed, avoid passing large objects as Activity inputs or return values. Keeping payloads small will help your Workflows replay and recover efficiently.

:::

### How to set the required Activity Timeouts {#required-timeout}

Activity Execution semantics rely on several parameters. The only required value that needs to be set is either a [Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) or a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). These values are set in the Activity Options.

### How to get the results of an Activity Execution {#get-activity-results}

The call to spawn an [Activity Execution](/activity-execution) generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command and provides the Workflow with an Awaitable. Workflow Executions can either block progress until the result is available through the Awaitable or continue progressing, making use of the result when it becomes available.

Since Activities are referenced by their string name, you can reference them dynamically to get the result of an Activity Execution.

```typescript
export async function DynamicWorkflow(activityName, ...args) {
  const acts = proxyActivities(/* activityOptions */);

  // these are equivalent
  await acts.activity1();
  await acts['activity1']();

  let result = await acts[activityName](...args);
  return result;
}
```

`proxyActivities()` returns an object that proxies the Activities by name, so `acts[activityName]()` invokes the Activity whose name is held in `activityName` and returns its result.

## How to run Worker Processes {#run-a-dev-worker}

The [Worker Process](/workers#worker-process) is where Workflow Functions and Activity Functions are executed.

- Each [Worker Entity](/workers#worker-entity) in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
- Each Worker Entity must also associate itself with exactly one [Task Queue](/task-queue).
- Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A [Worker Entity](/workers#worker-entity) is the component within a Worker Process that listens to a specific Task Queue.

Although multiple Worker Entities can be in a single Worker Process, a single Worker Entity Worker Process may be perfectly sufficient. For more information, see the [Worker tuning guide](/develop/worker-performance).

A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

## How to run a Worker on Docker in TypeScript {#run-a-worker-on-docker}

:::note

To improve Worker startup time, we recommend preparing Workflow bundles ahead of time. See our [production sample](https://github.com/temporalio/samples-typescript/tree/main/production) for details.

:::

Workers based on the TypeScript SDK can be deployed and run as Docker containers.

We recommend an LTS Node.js release such as 18, 20, 22, or 24. Both `amd64` and `arm64` architectures are supported. A glibc-based image is required; musl-based images are _not_ supported (see below).

The easiest way to deploy a TypeScript SDK Worker on Docker is to start with the `node:20-bullseye` image.
For example:

```dockerfile
FROM node:20-bullseye

# For better cache utilization, copy package.json and the lock file first and install
# the dependencies before copying the rest of the application and building.
COPY . /app
WORKDIR /app

# Alternatively, run `npm ci`, which installs only dependencies specified in the lock file
# and is generally faster.
RUN npm install --only=production \
    && npm run build

CMD ["npm", "start"]
```

For smaller images and/or more secure deployments, it is also possible to use `-slim` Docker image variants (like `node:20-bullseye-slim`) or `distroless/nodejs` Docker images (like `gcr.io/distroless/nodejs20-debian11`) with the following caveats.

### Using `node:slim` images

`node:slim` images do not contain some of the common packages found in regular images. This results in significantly smaller images. However, the TypeScript SDK requires the presence of root TLS certificates (the `ca-certificates` package), which are not included in `slim` images. The `ca-certificates` package is required even when connecting to a local Temporal Server or when using a server connection config that doesn't explicitly use TLS.

For this reason, the `ca-certificates` package must be installed during the construction of the Docker image. For example:

```dockerfile
FROM node:20-bullseye-slim

RUN apt-get update \
    && apt-get install -y ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# ... same as with the regular image
```

Failure to install this dependency results in a `[TransportError: transport error]` runtime error, because the certificates cannot be verified.

### Using `distroless/nodejs` images

`distroless/nodejs` images include only the files that are strictly required to execute `node`. This results in even smaller images (approximately half the size of `node:slim` images). It also significantly reduces the attack surface that could be exploited in the resulting Docker images.

It is generally possible and safe to execute TypeScript SDK Workers using `distroless/nodejs` images (unless your code itself requires dependencies that are not included in `distroless/nodejs`). However, some tools required for the build process (notably the `npm` command) are _not_ included in the `distroless/nodejs` image. This might result in various error messages during the Docker build.

The recommended solution is to use a multi-step Dockerfile. For example:

```dockerfile
# -- BUILD STEP --
FROM node:20-bullseye AS builder

COPY . /app
WORKDIR /app

RUN npm install --only=production \
    && npm run build

# -- RESULTING IMAGE --
FROM gcr.io/distroless/nodejs20-debian11

COPY --from=builder /app /app
WORKDIR /app

CMD ["node", "build/worker.js"]
```

### Properly configure Node.js memory in Docker

By default, `node` configures its maximum old-gen memory to 25% of the _physical memory_ of the machine on which it is executing, with a maximum of 4 GB. This is likely inappropriate when running Node.js in a Docker environment and can result in either underusage of available memory (`node` only uses a fraction of the memory allocated to the container) or overusage (`node` tries to use more memory than what is allocated to the container, which eventually leads to the process being killed by the operating system).

Therefore, we recommend that you always explicitly set the `--max-old-space-size` `node` argument to approximately 80% of the maximum size (in megabytes) that you want to allocate to the `node` process.
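For instance, a container limited to 2 GB of memory would get a cap of roughly 1600 MB. A minimal sketch (the figure is illustrative; size it to your own container):

```dockerfile
# Illustrative: cap V8's old-gen heap at ~80% of a 2 GB container
CMD ["node", "--max-old-space-size=1600", "build/worker.js"]
```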
You might need some experimentation and adjustment to find the most appropriate value based on your specific application. In practice, it is generally easier to provide this argument through the [`NODE_OPTIONS` environment variable](https://nodejs.org/api/cli.html#node_optionsoptions).

### Do not use Alpine

Alpine replaces glibc with musl, which is incompatible with the Rust core of the TypeScript SDK. If you receive errors like the following, it's probably because you are using Alpine.

```sh
Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /opt/app/node_modules/@temporalio/core-bridge/index.node)
```

Or like this:

```sh
Error: Error relocating /opt/app/node_modules/@temporalio/core-bridge/index.node: __register_atfork: symbol not found
```

## How to run a Temporal Cloud Worker {#run-a-temporal-cloud-worker}

To run a Worker that uses [Temporal Cloud](/cloud), you need to provide additional connection and client options that include the following:

- An address that includes your [Cloud Namespace Name](/namespaces) and a port number: `<namespace_id>.<account_id>.tmprl.cloud:<port>`.
- mTLS CA certificate.
- mTLS private key.

For more information about managing and generating client certificates for Temporal Cloud, see [How to manage certificates in Temporal Cloud](/cloud/certificates).

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see [Temporal Customization Samples](https://github.com/temporalio/samples-server).

### How to register types {#register-types}

All Workers listening to the same Task Queue name must be registered to handle the exact same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

In development, use [`workflowsPath`](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions/#workflowspath):

[snippets/src/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/snippets/src/worker.ts)

```ts
async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    taskQueue: 'snippets',
    activities,
  });
  await worker.run();
}
```

In this snippet, the Worker bundles the Workflow code at runtime. In production, you can improve your Worker's startup time by bundling in advance: as part of your production build, call `bundleWorkflowCode`:

[production/src/scripts/build-workflow-bundle.ts](https://github.com/temporalio/samples-typescript/blob/main/production/src/scripts/build-workflow-bundle.ts)

```ts
async function bundle() {
  const { code } = await bundleWorkflowCode({
    workflowsPath: require.resolve('../workflows'),
  });
  const codePath = path.join(__dirname, '../../workflow-bundle.js');

  await writeFile(codePath, code);
  console.log(`Bundle written to ${codePath}`);
}
```

Then the bundle can be passed to the Worker:

[production/src/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/production/src/worker.ts)

```ts
const workflowOption = () =>
  process.env.NODE_ENV === 'production'
    ? {
        workflowBundle: {
          codePath: require.resolve('../workflow-bundle.js'),
        },
      }
    : { workflowsPath: require.resolve('./workflows') };

async function run() {
  const worker = await Worker.create({
    ...workflowOption(),
    activities,
    taskQueue: 'production-sample',
  });
  await worker.run();
}
```

## How to shut down a Worker and track its state {#shut-down-a-worker}

Workers shut down if they receive any of the Signals enumerated in [shutdownSignals](https://typescript.temporal.io/api/interfaces/worker.RuntimeOptions#shutdownsignals): `'SIGINT'`, `'SIGTERM'`, `'SIGQUIT'`, and `'SIGUSR2'`.

In development, we shut down Workers with `Ctrl+C` (`SIGINT`) or [nodemon](https://github.com/temporalio/samples-typescript/blob/c37bae3ea235d1b6956fcbe805478aa46af973ce/hello-world/package.json#L10) (`SIGUSR2`). In production, you usually want to give Workers time to finish any in-progress Activities by setting [shutdownGraceTime](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions#shutdowngracetime).

As soon as a Worker receives a shutdown Signal or request, it stops polling for new Tasks and allows in-flight Tasks to complete until `shutdownGraceTime` is reached. Any Activities that are still running at that time will stop running and will be rescheduled by the Temporal Server when an Activity timeout occurs.

If you must guarantee that the Worker eventually shuts down, you can set [shutdownForceTime](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions#shutdownforcetime).

You might want to programmatically shut down Workers (with [Worker.shutdown()](https://typescript.temporal.io/api/classes/worker.Worker#shutdown)) in integration tests or when automating a fleet of Workers.

### Worker states

At any time, you can Query Worker state with [Worker.getState()](https://typescript.temporal.io/api/classes/worker.Worker#getstate). A Worker is always in one of seven states:

- `INITIALIZED`: The initial state of the Worker after calling [Worker.create()](https://typescript.temporal.io/api/classes/worker.Worker#create) and successfully connecting to the server.
- `RUNNING`: [Worker.run()](https://typescript.temporal.io/api/classes/worker.Worker#run) was called and the Worker is polling Task Queues.
- `FAILED`: The Worker encountered an unrecoverable error; `Worker.run()` should reject with the error.
- The last four states are related to the Worker shutdown process:
  - `STOPPING`: The Worker received a shutdown Signal or `Worker.shutdown()` was called. The Worker will forcefully shut down after `shutdownGraceTime` expires.
  - `DRAINING`: All Workflow Tasks have been drained; waiting for Activities and cached Workflows eviction.
  - `DRAINED`: All Activities and Workflows have completed; ready to shut down.
  - `STOPPED`: Shutdown complete; `worker.run()` resolves.

If you need more visibility into internal Worker state, see the [Worker class](https://typescript.temporal.io/api/classes/worker.Worker) in the API reference.

## How to start a Workflow Execution {#start-workflow-execution}

[Workflow Execution](/workflow-execution) semantics rely on several parameters; that is, to start a Workflow Execution you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

In the examples below, all Workflow Executions are started using a Temporal Client.
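For context, creating that Client involves a small amount of boilerplate. A minimal sketch for a local development Service (Temporal Cloud additionally requires TLS and address options):

```typescript
import { Client, Connection } from '@temporalio/client';

// Connects to localhost:7233 by default
const connection = await Connection.connect();
const client = new Client({ connection });
```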
To spawn Workflow Executions from within another Workflow Execution, use either the [Child Workflow](/develop/typescript/child-workflows) or External Workflow APIs.

See the [Customize Workflow Type](#workflow-type) section to see how to customize the name of the Workflow Type.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event ([WorkflowExecutionStarted](/references/events#workflowexecutionstarted)) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first [WorkflowTaskScheduled](/references/events#workflowtaskscheduled) Event.

When you have a Client, you can schedule the start of a Workflow with `client.workflow.start()`, specifying `workflowId`, `taskQueue`, and `args`. The call returns a Workflow handle as soon as the Server acknowledges the request.

```typescript
const handle = await client.workflow.start(example, {
  workflowId: 'your-workflow-id',
  taskQueue: 'your-task-queue',
  args: ['argument01', 'argument02', 'argument03'], // this is typechecked against workflowFn's args
});
const result = await handle.result();
```

Calling `client.workflow.start()` or `client.workflow.execute()` sends a command to Temporal Server to schedule a new Workflow Execution on the specified Task Queue. The Workflow Execution does not actually start until a Worker that has a matching Workflow Type, polling that Task Queue, picks it up.

You can test this by executing a Client command without a matching Worker. Temporal Server records the command in Event History, but does not make progress with the Workflow Execution until a Worker starts polling with a matching Task Queue and Workflow Definition.

Workflow Executions run in a separate V8 isolate context in order to provide a [deterministic runtime](/workflow-definition#deterministic-constraints).

### How to set a Workflow's Task Queue {#set-task-queue}

In most SDKs, the only Workflow Option that must be set is the name of the [Task Queue](/task-queue).

For any code to execute, a Worker Process must be running that contains a Worker Entity that is polling the same Task Queue name.

A Task Queue is a dynamic queue in Temporal polled by one or more Workers.

Workers bundle Workflow code and node modules using Webpack v5 and execute them inside V8 isolates. Activities are directly required and run by Workers in the Node.js environment.

Workers are flexible. You can host any or all of your Workflows and Activities on a Worker, and you can host multiple Workers on a single machine.

The Worker needs three main things:

- `taskQueue`: The Task Queue to poll. This is the only required argument.
- `activities`: Optional. Imported and supplied directly to the Worker.
- Workflow bundle. Choose one of the following options:
  - Specify `workflowsPath` pointing to your `workflows.ts` file to pass to Webpack; for example, `require.resolve('./workflows')`. Workflows are bundled with their dependencies.
  - If you prefer to handle the bundling yourself, pass a prebuilt bundle to `workflowBundle`.

```ts
async function run() {
  // Step 1: Register Workflows and Activities with the Worker and connect to
  // the Temporal server.
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities,
    taskQueue: 'hello-world',
  });
  // Worker connects to localhost by default and uses console.error for logging.
  // Customize the Worker by passing more options to create():
  // https://typescript.temporal.io/api/classes/worker.Worker
  // If you need to configure server connection parameters, see docs:
  // /typescript/security#encryption-in-transit-with-mtls

  // Step 2: Start accepting tasks on the `hello-world` queue
  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

`taskQueue` is the only required option; however, use `workflowsPath` and `activities` to register Workflows and Activities with the Worker.

When scheduling a Workflow, you must specify `taskQueue`.

```ts
// This is the code that is used to start a Workflow.
const connection = await Connection.connect();
const client = new Client({ connection });
const result = await client.workflow.execute(yourWorkflow, {
  // required
  taskQueue: 'your-task-queue',
  // required
  workflowId: 'your-workflow-id',
});
```

When creating a Worker, you must pass the `taskQueue` option to the `Worker.create()` function.

```ts
const worker = await Worker.create({
  // imported elsewhere
  activities,
  taskQueue: 'your-task-queue',
});
```

Optionally, in Workflow code, when calling an Activity, you can specify the Task Queue by passing the `taskQueue` option to `proxyActivities()`, `startChild()`, or `executeChild()`. If you do not specify `taskQueue`, the TypeScript SDK places Activity and Child Workflow Tasks in the same Task Queue as the Workflow Task Queue.

### How to set a Workflow Id {#workflow-id}

Although it is not required, we recommend providing your own [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) that maps to a business process or business entity identifier, such as an order identifier or customer identifier.

Start the Workflow with `client.workflow.start()`, passing any arguments. Specify your `taskQueue` and set your `workflowId` to a meaningful business identifier.

```typescript
const handle = await client.workflow.start(example, {
  workflowId: 'yourWorkflowId',
  taskQueue: 'yourTaskQueue',
  args: ['your', 'arg', 'uments'],
});
```

This starts a new Workflow Execution with the given Workflow Id, Task Queue name, and arguments.

### How to get the results of a Workflow Execution {#get-workflow-results}

If the call to start a Workflow Execution is successful, you will gain access to the Workflow Execution's Run Id. The Workflow Id, Run Id, and Namespace may be used to uniquely identify a Workflow Execution in the system and get its result.

It's possible to either block progress on the result (synchronous execution) or get the result at some other point in time (asynchronous execution). You can also use Queries to access the state and results of Workflow Executions.

To return the results of a Workflow Execution:

```typescript
return (
  'Completed ' +
  wf.workflowInfo().workflowId +
  ', Total Charged: ' +
  totalCharged
);
```

`totalCharged` is a value computed in your Workflow code. For a full example, see [subscription-workflow-project-template-typescript/src/workflows.ts](https://github.com/temporalio/subscription-workflow-project-template-typescript/blob/main/src/workflows.ts).

A Workflow function may return a result. If it doesn't (in which case the return type is `Promise<void>`), the result will be `undefined`.

If you started a Workflow with `client.workflow.start()`, you can choose to wait for the result anytime with `handle.result()`.
```typescript
const handle = client.workflow.getHandle(workflowId);
const result = await handle.result();
```

Using a Workflow Handle isn't necessary with `client.workflow.execute()`.

If you call `result()` on a Workflow that prematurely ended for some reason, it throws a [`WorkflowFailedError`](https://typescript.temporal.io/api/classes/client.WorkflowFailedError/) that reflects that reason. It is therefore recommended to catch that error.

```typescript
const handle = client.workflow.getHandle(workflowId);
try {
  const result = await handle.result();
} catch (err) {
  if (err instanceof WorkflowFailedError) {
    throw new Error('Temporal workflow failed: ' + workflowId, {
      cause: err,
    });
  } else {
    throw new Error('error from Temporal workflow ' + workflowId, {
      cause: err,
    });
  }
}
```

## Cancellation scopes in TypeScript {#cancellation-scopes}

In the TypeScript SDK, Workflows are represented internally by a tree of cancellation scopes, each with cancellation behaviors you can specify. By default, everything runs in the "root" scope.

Scopes are created using the [CancellationScope](https://typescript.temporal.io/api/classes/workflow.CancellationScope) constructor or one of three static helpers:

- [cancellable(fn)](https://typescript.temporal.io/api/classes/workflow.CancellationScope#cancellable-1): Children are automatically cancelled when their containing scope is cancelled.
  - Equivalent to `new CancellationScope().run(fn)`.
- [nonCancellable(fn)](https://typescript.temporal.io/api/classes/workflow.CancellationScope#noncancellable): Cancellation does not propagate to children.
  - Equivalent to `new CancellationScope({ cancellable: false }).run(fn)`.
- [withTimeout(timeoutMs, fn)](https://typescript.temporal.io/api/classes/workflow.CancellationScope#withtimeout): If a timeout triggers before `fn` resolves, the scope is cancelled, triggering cancellation of any enclosed operations, such as Activities and Timers.
  - Equivalent to `new CancellationScope({ cancellable: true, timeout: timeoutMs }).run(fn)`.

Cancellations are applied to cancellation scopes, which can encompass an entire Workflow or just part of one. Scopes can be nested, and cancellation propagates from outer scopes to inner ones. A Workflow's `main` function runs in the outermost scope. Cancellations are handled by catching `CancelledFailure`s thrown by cancellable operations.

`CancellationScope.run()` and the static helpers mentioned earlier return native JavaScript promises, so you can use the familiar Promise APIs like `Promise.all` and `Promise.race` to model your asynchronous logic.

You can also use the following APIs:

- `CancellationScope.current()`: Get the current scope.
- `scope.cancel()`: Cancel all operations inside a `scope`.
- `scope.run(fn)`: Run an async function within a `scope` and return the result of `fn`.
- `scope.cancelRequested`: A promise that resolves when a scope cancellation is requested, such as when Workflow code calls `cancel()` or the entire Workflow is cancelled by an external client.
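For reference, that external cancellation is typically requested through a Workflow handle on the Client side. A minimal sketch, assuming a connected `client` and a known Workflow Id:

```typescript
// Request cancellation from outside the Workflow; inside the Workflow, the
// root cancellation scope is cancelled and cancellable operations throw.
const handle = client.workflow.getHandle('your-workflow-id');
await handle.cancel();
```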
When a `CancellationScope` is cancelled, it propagates cancellation to any child scopes and to any cancellable operations created within it, such as the following:

- Activities
- Timers (created with the [sleep](https://typescript.temporal.io/api/namespaces/workflow#sleep) function)
- [Triggers](https://typescript.temporal.io/api/classes/workflow.Trigger)

### CancelledFailure

Timers and Triggers throw [CancelledFailure](https://typescript.temporal.io/api/classes/common.CancelledFailure) when cancelled; Activities and Child Workflows throw `ActivityFailure` and `ChildWorkflowFailure` with `cause` set to `CancelledFailure`.

One exception is when an Activity or Child Workflow is scheduled in an already cancelled scope (or Workflow). In this case, they propagate the `CancelledFailure` that was thrown to cancel the scope.

To simplify checking for cancellation, use the [isCancellation(err)](https://typescript.temporal.io/api/namespaces/workflow#iscancellation) function.

### Internal cancellation example

[packages/test/src/workflows/cancel-timer-immediately.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancel-timer-immediately.ts)

```ts
export async function cancelTimer(): Promise<void> {
  // Timers and Activities are automatically cancelled when their containing scope is cancelled.
  try {
    await CancellationScope.cancellable(async () => {
      const promise = sleep(1); // <-- Will be cancelled because it is attached to this closure's scope
      CancellationScope.current().cancel();
      await promise; // <-- Promise must be awaited in order for `cancellable` to throw
    });
  } catch (e) {
    if (e instanceof CancelledFailure) {
      console.log('Timer cancelled 👍');
    } else {
      throw e; // <-- Fail the workflow
    }
  }
}
```

Alternatively, the preceding can be written as the following.

[packages/test/src/workflows/cancel-timer-immediately-alternative-impl.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancel-timer-immediately-alternative-impl.ts)

```ts
export async function cancelTimerAltImpl(): Promise<void> {
  try {
    const scope = new CancellationScope();
    const promise = scope.run(() => sleep(1));
    scope.cancel(); // <-- Cancel the timer created in scope
    await promise; // <-- Throws CancelledFailure
  } catch (e) {
    if (e instanceof CancelledFailure) {
      console.log('Timer cancelled 👍');
    } else {
      throw e; // <-- Fail the workflow
    }
  }
}
```

### External cancellation example

The following code shows how to handle Workflow cancellation by an external client while an Activity is running.

[packages/test/src/workflows/handle-external-workflow-cancellation-while-activity-running.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/handle-external-workflow-cancellation-while-activity-running.ts)

```ts
const { httpPostJSON, cleanup } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function handleExternalWorkflowCancellationWhileActivityRunning(url: string, data: any): Promise<void> {
  try {
    await httpPostJSON(url, data);
  } catch (err) {
    if (isCancellation(err)) {
      console.log('Workflow cancelled');
      // Cleanup logic must be in a nonCancellable scope
      // If we'd run cleanup outside of a nonCancellable scope it would've been cancelled
      // before being started because the Workflow's root scope is cancelled.
      await CancellationScope.nonCancellable(() => cleanup(url));
    }
    throw err; // <-- Fail the Workflow
  }
}
```

### nonCancellable example

`CancellationScope.nonCancellable` prevents cancellation from propagating to children.

[activities-cancellation-heartbeating/src/cancellation-scopes.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-cancellation-heartbeating/src/cancellation-scopes.ts)

```ts
export async function nonCancellable(url: string): Promise<any> {
  // Prevent Activity from being cancelled and await completion.
  // Note that the Workflow is completely oblivious and impervious to cancellation in this example.
  return CancellationScope.nonCancellable(() => httpGetJSON(url));
}
```

### withTimeout example

A common operation is to cancel one or more Activities if a deadline elapses. `withTimeout` creates a `CancellationScope` that is automatically cancelled after a timeout.

[packages/test/src/workflows/multiple-activities-single-timeout.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/multiple-activities-single-timeout.ts)

```ts
export function multipleActivitiesSingleTimeout(urls: string[], timeoutMs: number): Promise<any> {
  const { httpGetJSON } = proxyActivities<typeof activities>({
    startToCloseTimeout: timeoutMs,
  });

  // If the timeout triggers before all Activities complete,
  // the Workflow fails with a CancelledFailure.
  return CancellationScope.withTimeout(timeoutMs, () => Promise.all(urls.map((url) => httpGetJSON(url))));
}
```

### scope.cancelRequested

You can await `cancelRequested` to make a Workflow aware of cancellation while waiting on `nonCancellable` scopes.

[packages/test/src/workflows/cancel-requested-with-non-cancellable.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancel-requested-with-non-cancellable.ts)

```ts
const { httpGetJSON } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function resumeAfterCancellation(url: string): Promise<any> {
  let result: any = undefined;
  const scope = new CancellationScope({ cancellable: false });
  const promise = scope.run(() => httpGetJSON(url));
  try {
    result = await Promise.race([scope.cancelRequested, promise]);
  } catch (err) {
    if (!(err instanceof CancelledFailure)) {
      throw err;
    }
    // Prevent Workflow from completing so Activity can complete
    result = await promise;
  }
  return result;
}
```

### Cancellation scopes and callbacks

Callbacks are not particularly useful in Workflows because all meaningful asynchronous operations return promises. In the rare case that code uses callbacks and needs to handle cancellation, a callback can consume the `CancellationScope.cancelRequested` promise.

[packages/test/src/workflows/cancellation-scopes-with-callbacks.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/cancellation-scopes-with-callbacks.ts)

```ts
function doSomething(callback: () => any) {
  setTimeout(callback, 10);
}

export async function cancellationScopesWithCallbacks(): Promise<void> {
  await new Promise<void>((resolve, reject) => {
    doSomething(resolve);
    CancellationScope.current().cancelRequested.catch(reject);
  });
}
```

### Nesting cancellation scopes

You can achieve complex flows by nesting cancellation scopes.
[packages/test/src/workflows/nested-cancellation.ts](https://github.com/temporalio/sdk-typescript/blob/main/packages/test/src/workflows/nested-cancellation.ts)

```ts
const { setup, httpPostJSON, cleanup } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10m',
});

export async function nestedCancellation(url: string): Promise<void> {
  await CancellationScope.cancellable(async () => {
    await CancellationScope.nonCancellable(() => setup());
    try {
      await CancellationScope.withTimeout(1000, () => httpPostJSON(url, { some: 'data' }));
    } catch (err) {
      if (isCancellation(err)) {
        await CancellationScope.nonCancellable(() => cleanup(url));
      }
      throw err;
    }
  });
}
```

### Sharing promises between scopes

Operations like Timers and Activities are cancelled by the cancellation scope they were created in. Promises returned by these operations can be awaited in different scopes.

[activities-cancellation-heartbeating/src/cancellation-scopes.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-cancellation-heartbeating/src/cancellation-scopes.ts)

```ts
export async function sharedScopes(): Promise<any> {
  // Start activities in the root scope
  const p1 = httpGetJSON('http://url1.ninja');
  const p2 = httpGetJSON('http://url2.ninja');

  const scopePromise = CancellationScope.cancellable(async () => {
    const first = await Promise.race([p1, p2]);
    // Does not cancel activity1 or activity2 as they're linked to the root scope
    CancellationScope.current().cancel();
    return first;
  });
  return await scopePromise;
  // The Activity that did not complete will effectively be cancelled when
  // Workflow completes unless the Activity is awaited:
  // await Promise.all([p1, p2]);
}
```

[activities-cancellation-heartbeating/src/cancellation-scopes.ts](https://github.com/temporalio/samples-typescript/blob/main/activities-cancellation-heartbeating/src/cancellation-scopes.ts)

```ts
export async function shieldAwaitedInRootScope(): Promise<any> {
  let p: Promise<any> | undefined = undefined;

  await CancellationScope.nonCancellable(async () => {
    p = httpGetJSON('http://example.com'); // <-- Start activity in nonCancellable scope without awaiting completion
  });
  // Activity is shielded from cancellation even though it is awaited in the cancellable root scope
  return p;
}
```

---

## Debugging - TypeScript SDK

The Debugging section of the Temporal TypeScript SDK developer's guide covers tools for debugging and how to troubleshoot common issues.

## How to debug in a development environment {#debug-in-a-development-environment}

In addition to the normal development tools of logging and a debugger, you can also see what's happening in your Workflow by using the [Web UI](/web-ui) or [Temporal CLI](/cli).

## How to debug in a production environment {#debug-in-a-production-environment}

You can debug production Workflows using:

- [Web UI](/web-ui)
- [Temporal CLI](/cli)
- [Replay](/develop/typescript/testing-suite#replay)
- [Tracing](/develop/typescript/observability#tracing)
- [Logging](/develop/typescript/observability#logging)

You can debug and tune Worker performance with metrics and the [Worker performance guide](/develop/worker-performance). For information on setting up SDK metrics, see [Metrics](/develop/typescript/observability#metrics) in the Observability section of the TypeScript SDK developer's guide.

Debug Server performance with [Cloud metrics](/cloud/metrics/) or [self-hosted Server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics).
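To illustrate the Replay option above: the Worker package can replay a downloaded Event History against your current Workflow code and surface non-deterministic changes. A minimal sketch, assuming you've exported a Workflow's history to `history.json` (for example, from the Web UI):

```typescript
import { Worker } from '@temporalio/worker';
import { readFile } from 'node:fs/promises';

async function replay(): Promise<void> {
  const history = JSON.parse(await readFile('history.json', 'utf8'));
  // Throws if the current Workflow code is incompatible with the recorded history
  await Worker.runReplayHistory({ workflowsPath: require.resolve('./workflows') }, history);
}
```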
## How to troubleshoot common issues in the TypeScript SDK {#troubleshoot-common-issues}

### Two locations to watch

- Workflow errors are reflected in Temporal Web.
- Worker errors and logs are reflected in the terminal.

If something isn't behaving the way you expect, make sure to check both locations for helpful error messages.

### Stale Workflows

If you are developing Workflows and finding that code isn't executing as expected, the first place to look is whether old Workflows are still running. If those old Workflows have the same name and are on the same Task Queue, Temporal will try to continue executing them on your new code by design. You may get errors that make no sense to you because:

- Temporal is trying to execute old Workflow code that no longer exists in your codebase, or
- your new Client code is expecting Temporal to execute old Workflow/Activity code it doesn't yet know about.

The biggest sign that this is happening is if you notice Temporal acting non-deterministically: running the same Workflow twice gets different results. Stale Workflows are usually a non-issue because the errors generated are just noise from code you no longer want to run. If you need to terminate old stale Workflows, you can do so with Temporal Web or the Temporal CLI.

### Workflow/Activity registration errors

**If your Workflows or Activities are not imported or spelled correctly**, here are some errors we've seen:

- `ApplicationFailure: 'MyFunction' is not a function`
- `Workflow did not register a handler for MyQuery`

Double check that your Workers are registering the right Workflow and Activity Definitions (function names) on the right Task Queues.

**If you are running Temporal in a monorepo**, then your `node_modules` may be in a different location than where Temporal expects to find it by default, which results in errors like:

```bash
[ERROR] Module not found: Error: Can't resolve '@temporalio/workflow/lib/worker-interface.js' in '/src'
```

Our [Next.js tutorial](https://learn.temporal.io/tutorials/typescript/nextjs) is written for people setting up Temporal **within an existing monorepo**, which may be of use here. When you pass a `workflowsPath`, our Webpack config expects to find `node_modules` in the same or a parent/ancestor directory.

**If you are custom bundling your own Workflows**, you may get errors like these:

```bash
[ERROR] Failed to activate workflow { runId: 'aaf84a83-51ce-462a-9ab7-6a641a703bff', error: ReferenceError: exports is not defined, workflowExists: false }
```

Temporal Workflow bundles need to [export a set of methods that fit the compiled `worker-interface.ts` from `@temporalio/workflow`](https://github.com/temporalio/sdk-typescript/blob/eaa2d205c9bc5ff4a3b17c0b34f2dcf6b1e0264a/packages/worker/src/workflow/bundler.ts#L81) as an entry point. We offer a `bundleWorkflowCode` method to assist you with this, though it uses our Webpack settings. For more information, see the [Register types](/develop/typescript/core-application#register-types) section.

### Webpack errors

The TypeScript SDK's Worker bundles Workflows based on `workflowsPath` with [Webpack](https://webpack.js.org/) and runs them inside V8 isolates. If Webpack fails to create the bundle, the SDK will throw an error and emit the Webpack logs using the SDK's [logger](/develop/typescript/observability#logging).
If you do not see Webpack output in your terminal, make sure that you have not disabled SDK logging (see the reference to `Runtime.install()` in the link above).

**A common mistake for newcomers to the TypeScript SDK is trying to use Node.js built-ins and modules in their Workflow code.** Usually, the best thing to do is move that code to an Activity. Some common examples that will **not** work in the Workflow isolate:
**Importing Node.js built-in modules**

:::danger Antipattern

```ts
import fs from 'fs';

const config = fs.readFileSync('config.json', 'utf8');
```

:::

This is invalid because reading from the filesystem is a non-deterministic operation: the file may change from the time of the original Workflow Execution to when the Workflow is replayed. You'll typically see an error in this form in the Webpack output:

```
2021-10-14T19:22:00.606Z [INFO] Module not found: Error: Can't resolve 'fs' in '/Users/you/your-project/src'
2021-10-14T19:22:00.606Z [INFO] resolve 'fs' in '/Users/you/your-project/src'
2021-10-14T19:22:00.606Z [INFO] Parsed request is a module
2021-10-14T19:22:00.606Z [INFO] using description file: /Users/you/your-project/package.json (relative path: ./src)
2021-10-14T19:22:00.606Z [INFO] Field 'browser' doesn't contain a valid alias configuration
```
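The usual fix is to move the non-deterministic call into an Activity and invoke it through `proxyActivities`. A minimal sketch, using a hypothetical `readConfig` Activity:

```ts
// activities.ts: filesystem access is fine here, because Activities run in plain Node.js
import { readFile } from 'node:fs/promises';

export async function readConfig(): Promise<string> {
  return await readFile('config.json', 'utf8');
}
```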
**Importing and calling Activities directly from Workflow code**

:::danger Antipattern

```ts
import { makeHTTPRequest } from './activities';

export async function yourWorkflow(): Promise<string> {
  return await makeHTTPRequest('https://temporal.io');
}
```

:::

This is invalid because Activity implementations should not be directly referenced by Workflow code. Workflows use Activities to make network calls and read from the filesystem, operations that are non-deterministic by nature because they rely on external state. Temporal records Activity results in the Workflow history; if your Workflow is replayed, completed Activities are not rerun. Instead, their recorded results are delivered to the Workflow.

You'll typically see an error in this form in the Webpack output:

```
2021-10-14T19:46:52.731Z [INFO] ERROR in ./src/activities.ts 8:31-46
2021-10-14T19:46:52.731Z [INFO] Module not found: Error: Can't resolve 'http' in '/Users/you/your-project/src'
2021-10-14T19:46:52.731Z [INFO]
2021-10-14T19:46:52.731Z [INFO] BREAKING CHANGE: webpack < 5 used to include polyfills for node.js core modules by default.
2021-10-14T19:46:52.731Z [INFO] This is no longer the case. Verify if you need this module and configure a polyfill for it.
2021-10-14T19:46:52.731Z [INFO]
2021-10-14T19:46:52.731Z [INFO] If you want to include a polyfill, you need to:
2021-10-14T19:46:52.731Z [INFO]   - add a fallback 'resolve.fallback: { "http": require.resolve("stream-http") }'
2021-10-14T19:46:52.731Z [INFO]   - install 'stream-http'
2021-10-14T19:46:52.731Z [INFO] If you don't want to include a polyfill, you can use an empty module like this:
2021-10-14T19:46:52.731Z [INFO]   resolve.fallback: { "http": false }
```

To properly call your Activities from Workflow code, use `proxyActivities` and make sure to import only the Activity types.

```ts
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

const { makeHTTPRequest } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

export async function yourWorkflow(): Promise<string> {
  return await makeHTTPRequest('https://temporal.io');
}
```
### Works in Dev but not in Prod

The two main sources of dev-prod discrepancies are in bundling and connecting.

#### Production bundling

You may experience your Client sending stripped names as the Workflow "Type" when scheduling a Workflow. Webpack minification can change the Workflow's function name to something shorter, and Temporal won't know how to handle the mismatch between the shortened name and the expected Workflow Type. You may experience errors like this:

```
Error: 3 INVALID_ARGUMENT: WorkflowType is not set on request.
```

Or you may see the shortened names in the Temporal Service's Web UI, for example a Workflow Type reduced to the single letter 's'.

This issue can happen when your bundler strips out Workflow function names. Temporal relies on those names to set the Workflow Type shown in the Service Web UI. To prevent Webpack from shortening Workflow function names, set the `keep_fnames` option to `true` in the `TerserPlugin` configuration section of your `webpack.config.js`:

```js
// webpack.config.js
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          keep_fnames: true, // don't strip function names in production
        },
      }),
    ],
  },
};
```

If you bundle with esbuild, the equivalent option is `keepNames`:

```js
require('esbuild').buildSync({
  entryPoints: ['app.js'],
  minify: true,
  keepNames: true,
  outfile: 'out.js',
});
```

See the [esbuild docs](https://esbuild.github.io/api/#keep-names) for more information.

#### Connecting to Temporal Server

If you are trying to connect in production and getting this:

```bash
[TransportError: transport error]
```

It is a sign that something is wrong with your Cert/Key pair. Log it out and make sure it is an exact match with what is expected (often, the issue is whitespace introduced when injecting the pair from your production secrets management environment).

### Resetting Workflows to deal with logical bugs

You can "rewind time" using the Temporal CLI, resetting Workflow History to some previous point in time. You can read the Temporal CLI docs on:

- [Restarting and resetting Workflows by ID](/cli)
- [Resetting all Workflows by binary checksum identifier](/cli)

If you need to reset programmatically, the TypeScript SDK does not have any high-level APIs for this, but you can make raw gRPC calls to [resetWorkflowExecution](https://typescript.temporal.io/api/classes/proto.temporal.api.workflowservice.v1.WorkflowService-1/#resetworkflowexecution).

Resetting should only be used to deal with serious logical bugs in your code: it's not for handling transient failures, like a downstream service being unreachable. It should not be used in the course of normal application flows.
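As a rough sketch of such a raw call (the field names follow the `ResetWorkflowExecutionRequest` proto; the Event Id and reason here are illustrative, and you should verify the request shape against the API reference for your SDK version):

```typescript
import { randomUUID } from 'node:crypto';

// Assumes a connected `client` from @temporalio/client
await client.workflowService.resetWorkflowExecution({
  namespace: 'default',
  workflowExecution: { workflowId: 'your-workflow-id' },
  reason: 'Recover from a logical bug in a previous deployment',
  // Event Id of the WorkflowTaskCompleted Event to rewind to (illustrative)
  workflowTaskFinishEventId: 4,
  requestId: randomUUID(),
});
```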
### gRPC call timeouts (context deadline exceeded)

The opaque `context deadline exceeded` error comes from gRPC:

```
Error: 4 DEADLINE_EXCEEDED: context deadline exceeded
    at Object.callErrorFromStatus (/Users/swyx/Work/Temporal/samples-typescript/nextjs-oneclick/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
    at Object.onReceiveStatus (/Users/swyx/Work/Temporal/samples-typescript/nextjs-oneclick/node_modules/@grpc/grpc-js/build/src/client.js:179:52)
    at Object.onReceiveStatus (/Users/swyx/Work/Temporal/samples-typescript/nextjs-oneclick/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:336:141)
    at Object.onReceiveStatus (/Users/swyx/Work/Temporal/samples-typescript/nextjs-oneclick/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:299:181)
    at /Users/swyx/Work/Temporal/samples-typescript/nextjs-oneclick/node_modules/@grpc/grpc-js/build/src/call-stream.js:145:78
    at processTicksAndRejections (node:internal/process/task_queues:78:11) {
  code: 4,
  details: 'context deadline exceeded',
  metadata: Metadata { internalRepr: Map(1) { 'content-type' => [Array] }, options: {} },
  page: '/api/getBuyState'
}
```

Several conditions can cause this error, including network hiccups, timeouts that are too short, an overloaded server, and Queries to a Workflow Execution whose Query handler throws an error.

Some troubleshooting actions you can take:

- Verify that the connection from your Worker to the Temporal Server is working and doesn't have unusually high latency.
- If you are running Temporal Server yourself, check your [server metrics](/self-hosted-guide/production-checklist#scaling-and-metrics) to ensure it's not overloaded.
- If what's timing out is a Query, check the logs of your Workers to see if they are having issues handling the Query.

If none of the preceding actions help you discover why timeouts are occurring, please try to produce a minimal repro and we'll be glad to help.

---

## Enriching the User Interface - TypeScript SDK

Temporal supports adding context to Workflows and Events with metadata. This helps users identify and understand Workflows and their operations.

## Adding Summary and Details to Workflows

### Starting a Workflow

When starting a Workflow, you can provide a static summary and details to help identify the Workflow in the UI:

```typescript
const client = new Client();

// Start a workflow with static summary and details
const handle = await client.workflow.start(yourWorkflow, {
  args: ['workflow input'],
  taskQueue: 'your-task-queue',
  workflowId: 'your-workflow-id',
  staticSummary: 'Order processing for customer #12345',
  staticDetails: 'Processing premium order with expedited shipping',
});
```

`staticSummary` is a single-line description that appears in the Workflow list view, limited to 200 bytes. `staticDetails` can be multi-line and provides more comprehensive information that appears in the Workflow details view, with a larger limit of 20K bytes. The input format is standard Markdown, excluding images, HTML, and scripts.

You can also use the `execute` method with the same parameters:

```typescript
const result = await client.workflow.execute(yourWorkflow, {
  args: ['workflow input'],
  taskQueue: 'your-task-queue',
  workflowId: 'your-workflow-id',
  staticSummary: 'Order processing for customer #12345',
  staticDetails: 'Processing premium order with expedited shipping',
});
```

### Inside the Workflow

Within a Workflow, you can get and set the _current workflow details_.
Unlike the static summary/details set at Workflow start, this value can be updated throughout the life of the Workflow. Current Workflow details also take Markdown format (excluding images, HTML, and scripts) and can span multiple lines.

```typescript
export async function yourWorkflow(input: string): Promise<string> {
  // Get the current details
  const currentDetails = getCurrentDetails();
  console.log(`Current details: ${currentDetails}`);

  // Set/update the current details
  setCurrentDetails('Updated workflow details with new status');

  return 'Workflow completed';
}
```

### Adding Summary to Activities and Timers

You can attach a `summary` to Activities by using `executeWithOptions` when calling them:

```typescript
const { yourActivity } = proxyActivities<typeof activities>({
  startToCloseTimeout: '10 seconds',
});

export async function yourWorkflow(input: string): Promise<string> {
  // Execute an activity with a summary using executeWithOptions
  const result = await yourActivity.executeWithOptions(
    { staticSummary: 'Processing user data' },
    [input], // Note: arguments must be passed as an array
  );
  return result;
}
```

Similarly, you can attach a `summary` to Timers within a Workflow:

```typescript
export async function yourWorkflow(input: string): Promise<string> {
  // Create a timer with a summary
  await sleep('5 minutes', { summary: 'Waiting for payment confirmation' });
  return 'Timer completed';
}
```

The input format for `summary` is a string, limited to 200 bytes.

## Viewing Summary and Details in the UI

Once you've added summaries and details to your Workflows, Activities, and Timers, you can view this enriched information in the Temporal Web UI. Navigate to your Workflow's details page to see the metadata displayed in two key locations:

### Workflow Overview Section

At the top of the workflow details page, you'll find the workflow-level metadata:

- **Summary & Details**: Displays the static summary and static details set when starting the Workflow
- **Current Details**: Displays the dynamic details that can be updated during Workflow Execution

All Workflow details support standard Markdown formatting (excluding images, HTML, and scripts), allowing you to create rich, structured information displays.

### Event History

Individual Events in the Workflow's Event History display their associated summaries when available. Workflow, Activity, and Timer summaries appear in purple text next to their corresponding Events, providing immediate context without requiring you to expand the Event details. When you do expand an Event, the summary is also prominently displayed in the detailed view.

---

## Entity pattern - TypeScript SDK

### Single-entity design pattern in TypeScript {#single-entity-pattern}

The following is a simple pattern that represents a single entity. It tracks the number of iterations regardless of frequency, and calls `continueAsNew` while properly handling pending updates from Signals.

```ts
interface Input {
  /* Define your Workflow input type here */
}
interface Update {
  /* Define your Workflow update type here */
}

const MAX_ITERATIONS = 1;

export async function entityWorkflow(
  input: Input,
  isNew = true,
): Promise<void> {
  try {
    const pendingUpdates = Array<Update>();
    setHandler(updateSignal, (updateCommand) => {
      pendingUpdates.push(updateCommand);
    });

    if (isNew) {
      await setup(input);
    }

    for (let iteration = 1; iteration <= MAX_ITERATIONS; ++iteration) {
      // Ensure that we don't block the Workflow Execution forever waiting
      // for updates, which means that it will eventually Continue-As-New
      // even if it does not receive updates.
      await condition(() => pendingUpdates.length > 0, '1 day');

      while (pendingUpdates.length) {
        const update = pendingUpdates.shift();
        await runAnActivityOrChildWorkflow(update);
      }
    }
  } catch (err) {
    if (isCancellation(err)) {
      await CancellationScope.nonCancellable(async () => {
        await cleanup();
      });
    }
    throw err;
  }
  await continueAsNew(input, false);
}
```

---

## Failure detection - TypeScript SDK feature guide

This page shows how to do the following:

- [Raise and Handle Exceptions](#exception-handling)
- [Deliberately Fail Workflows](#workflow-failure)
- [Workflow Timeouts](#workflow-timeouts)
- [Workflow retries](#workflow-retries)
- [Activity Timeouts](#activity-timeouts)
- [Activity Retry Policy](#activity-retries)
- [Activity next Retry delay](#activity-next-retry-delay)
- [Heartbeat an Activity](#activity-heartbeats)
- [Activity Heartbeat Timeout](#activity-heartbeat-timeout)

## Raise and Handle Exceptions {#exception-handling}

In each Temporal SDK, error handling is implemented idiomatically, following the conventions of the language. Temporal uses several different error classes internally, such as [`CancelledFailure`](https://typescript.temporal.io/api/classes/common.CancelledFailure) in the TypeScript SDK, which handles a Workflow cancellation. You should not raise or otherwise implement these manually, as they are tied to Temporal platform logic.

The one Temporal error class that you will typically raise deliberately is [`ApplicationFailure`](https://typescript.temporal.io/api/classes/common.ApplicationFailure). In fact, *any* other exception that is raised from your TypeScript code in a Temporal Activity is converted to an `ApplicationFailure` internally. This way, an error's type, severity, and any additional details can be sent to the Temporal Service, indexed by the Web UI, and even serialized across language boundaries. In other words, these two code samples do the same thing:

```typescript
class InvalidChargeError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'InvalidChargeError';
    Object.setPrototypeOf(this, InvalidChargeError.prototype);
  }
}

if (chargeAmount < 0) {
  throw new InvalidChargeError(`Invalid charge amount: ${chargeAmount} (must be above zero)`);
}
```

```typescript
if (chargeAmount < 0) {
  throw ApplicationFailure.create({
    message: `Invalid charge amount: ${chargeAmount} (must be above zero)`,
    type: 'InvalidChargeError',
  });
}
```

Depending on your implementation, you may decide to use either method. One reason to use the Temporal `ApplicationFailure` class is that it allows you to set an additional `nonRetryable` parameter. This way, you can decide whether an error should not be retried automatically by Temporal. This can be useful for deliberately failing a Workflow due to bad input data, rather than waiting for a timeout to elapse:

```typescript
if (chargeAmount < 0) {
  throw ApplicationFailure.create({
    message: `Invalid charge amount: ${chargeAmount} (must be above zero)`,
    nonRetryable: true,
  });
}
```

You can alternatively specify a list of errors that are non-retryable in your Activity [Retry Policy](#activity-retries).

## Failing Workflows {#workflow-failure}

One of the core design principles of Temporal is that an Activity Failure will never directly cause a Workflow Failure: a Workflow should never return as Failed unless deliberately. The default retry policy associated with Temporal Activities is to retry them until reaching a certain timeout threshold.
Activities will not actually *return* a failure to your Workflow until this condition or another non-retryable condition is met. At this point, you can decide how to handle an error returned by your Activity the way you would in any other program. For example, you could implement a [Saga Pattern](https://github.com/temporalio/samples-typescript/tree/main/saga) that uses `try` and `catch` blocks to "unwind" some of the steps your Workflow has performed up to the point of Activity Failure.

**You will only fail a Workflow by manually raising an `ApplicationFailure` from the Workflow code.** You could do this in response to an Activity Failure, if the failure of that Activity means that your Workflow should not continue:

```typescript
try {
  await addAddress();
} catch (err) {
  if (err instanceof ActivityFailure && err.cause instanceof ApplicationFailure) {
    log.error(err.cause.message);
    throw err;
  }
}
```

This works differently in a Workflow than raising exceptions from Activities. In an Activity, any TypeScript exceptions or custom exceptions are converted to a Temporal `ApplicationFailure`. In a Workflow, any exception that is raised other than an explicit Temporal `ApplicationFailure` will only fail that particular [Workflow Task](https://docs.temporal.io/tasks#workflow-task-execution) and be retried. This includes typical TypeScript runtime errors, like a `TypeError` from reading a property of `undefined`, that are raised automatically. These errors are treated as bugs that can be corrected with a fixed deployment, rather than a reason for a Temporal Workflow Execution to return unexpectedly.

## Workflow Timeouts {#workflow-timeouts}

**How to set Workflow Timeouts using the Temporal TypeScript SDK**

Each Workflow timeout controls the maximum duration of a different aspect of a Workflow Execution.

Before we continue, we want to note that we generally do not recommend setting Workflow Timeouts, because Workflows are designed to be long-running and resilient; setting a Timeout can limit a Workflow's ability to handle unexpected delays or long-running processes. If you need to perform an action inside your Workflow after a specific period of time, we recommend using a Timer.

Workflow Timeouts are set when starting a Workflow using either the Client or Workflow API.
- **[Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout):** restricts the maximum amount of time that a single Workflow Execution can be executed
- **[Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout):** restricts the maximum amount of time that a single Workflow Run can last
- **[Workflow Task Timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout):** restricts the maximum amount of time that a Worker can execute a Workflow Task

The following properties can be set on the [`WorkflowOptions`](https://typescript.temporal.io/api/interfaces/client.WorkflowOptions/) when starting a Workflow using either the Client or Workflow API:

- [`workflowExecutionTimeout`](https://typescript.temporal.io/api/interfaces/client.WorkflowOptions/#workflowexecutiontimeout)
- [`workflowRunTimeout`](https://typescript.temporal.io/api/interfaces/client.WorkflowOptions/#workflowruntimeout)
- [`workflowTaskTimeout`](https://typescript.temporal.io/api/interfaces/client.WorkflowOptions/#workflowtasktimeout)

```typescript
await client.workflow.start(example, {
  taskQueue,
  workflowId,
  // Set Workflow Timeout duration
  workflowExecutionTimeout: '1 day',
  // workflowRunTimeout: '1 minute',
  // workflowTaskTimeout: '30 seconds',
});
```

## Workflow retries {#workflow-retries}

**How to set Workflow retries using the Temporal TypeScript SDK**

A Retry Policy can work in cooperation with the timeouts to provide fine controls to optimize the execution experience.

Use a [Retry Policy](/encyclopedia/retry-policies) to retry a Workflow Execution in the event of a failure. Workflow Executions do not retry by default, and Retry Policies should be used with Workflow Executions only in certain situations.

The Retry Policy can be set through the [`WorkflowOptions.retry`](https://typescript.temporal.io/api/interfaces/client.WorkflowOptions/#retry) property when starting a Workflow using either the Client or Workflow API.

```typescript
const handle = await client.workflow.start(example, {
  taskQueue,
  workflowId,
  retry: {
    maximumAttempts: 3,
    maximumInterval: '30 seconds',
  },
});
```

## Activity Timeouts {#activity-timeouts}

**How to set Activity Timeouts using the Temporal TypeScript SDK**

Each Activity Timeout controls the maximum duration of a different aspect of an Activity Execution.
The following Timeouts are available in the Activity Options:

- **[Schedule-To-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout):** the maximum amount of time allowed for the entire [Activity Execution](/activity-execution), from when the [Activity Task](/tasks#activity-task) is initially scheduled by the Workflow to when the server receives a successful completion for that Activity Task
- **[Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout):** the maximum time allowed for a single [Activity Task Execution](/tasks#activity-task-execution), from when the Activity Task Execution gets polled by a [Worker](/workers#worker) to when the server receives a successful completion for that Activity Task
- **[Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout):** the maximum amount of time allowed from when an [Activity Task](/tasks#activity-task) is initially scheduled by the Workflow to when a [Worker](/workers#worker) polls the Activity Task Execution

An Activity Execution must have either the Start-To-Close or the Schedule-To-Close Timeout set.

The following properties can be set on the [`ActivityOptions`](https://typescript.temporal.io/api/interfaces/common.ActivityOptions) when creating Activity proxy functions using the [`proxyActivities()`](https://typescript.temporal.io/api/namespaces/workflow#proxyactivities) API:

- [`scheduleToCloseTimeout`](https://typescript.temporal.io/api/interfaces/common.ActivityOptions/#scheduletoclosetimeout)
- [`startToCloseTimeout`](https://typescript.temporal.io/api/interfaces/common.ActivityOptions/#starttoclosetimeout)
- [`scheduleToStartTimeout`](https://typescript.temporal.io/api/interfaces/common.ActivityOptions/#scheduletostarttimeout)

```typescript
const { myActivity } = proxyActivities<typeof activities>({
  scheduleToCloseTimeout: '5m',
  // startToCloseTimeout: "30s", // recommended
  // scheduleToStartTimeout: "60s",
});
```

## Activity Retry Policy {#activity-retries}

**How to set an Activity Retry Policy using the Temporal TypeScript SDK**

A Retry Policy works in cooperation with the timeouts to provide fine controls to optimize the execution experience.

Activity Executions are automatically associated with a default [Retry Policy](/encyclopedia/retry-policies) if a custom one is not provided.

To set an Activity's Retry Policy in TypeScript, assign the [`ActivityOptions.retry`](https://typescript.temporal.io/api/interfaces/common.ActivityOptions#retry) property when creating the corresponding Activity proxy function using the [`proxyActivities()`](https://typescript.temporal.io/api/namespaces/workflow#proxyactivities) API.

```typescript
const { myActivity } = proxyActivities<typeof activities>({
  // ...
  retry: {
    initialInterval: '10s',
    maximumAttempts: 5,
  },
});
```

## Activity next Retry delay {#activity-next-retry-delay}

**How to override the next Retry delay following an Activity failure using the Temporal TypeScript SDK**

The time to wait after a retryable Activity failure until the next retry is attempted is normally determined by that Activity's Retry Policy. However, an Activity may override that duration when explicitly failing with an [`ApplicationFailure`](https://typescript.temporal.io/api/classes/common.ApplicationFailure) by setting a next Retry delay.
## Heartbeat an Activity {#activity-heartbeats}

**How to Heartbeat an Activity using the Temporal TypeScript SDK**

An [Activity Heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) is a ping from the [Worker Process](/workers#worker-process) that is executing the Activity to the [Temporal Service](/temporal-service). Each Heartbeat informs the Temporal Service that the [Activity Execution](/activity-execution) is making progress and the Worker has not crashed. If the Temporal Service does not receive a Heartbeat within a [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) time period, the Activity is considered timed out and another [Activity Task Execution](/tasks#activity-task-execution) may be scheduled according to the Retry Policy.

Activity Cancellations are delivered to Activities from the Temporal Service when they Heartbeat. Activities that don't Heartbeat can't get notified of Cancellation requests.

:::note Handling Activity Cancellation

In the TypeScript SDK, Activity implementations must opt in to observe cancellation. Use one of the following in your Activity to be notified of cancellation:

- `await Context.current().cancelled`: rejects with `CancelledFailure`.
- `await Context.current().sleep(ms)`: rejects on cancellation.
- Pass `Context.current().cancellationSignal` (an `AbortSignal`) to libraries that support abort.

> **Important:** Activities receive cancellation notifications when they heartbeat; if an Activity doesn't heartbeat, delivery of the cancellation notification can be delayed.

See the [Activity namespace reference](https://typescript.temporal.io/api/namespaces/activity#cancellation) and the [Context API](https://typescript.temporal.io/api/classes/activity.Context) for more information.

:::

Heartbeats may not always be sent to the Temporal Service; they may be [throttled](/encyclopedia/detecting-activity-failures#throttling) by the Worker. Heartbeat throttling may lead to Cancellation getting delivered later than expected.

To Heartbeat an Activity Execution in TypeScript, call the [`heartbeat()`](https://typescript.temporal.io/api/namespaces/activity#heartbeat) function from the Activity implementation.

```typescript
import { heartbeat, sleep } from '@temporalio/activity';

export async function myActivity(): Promise<void> {
  for (let progress = 1; progress <= 1000; ++progress) {
    // Do something that takes time
    await sleep('1s');
    heartbeat();
  }
}
```
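Because Cancellation requests are delivered when the Activity Heartbeats (see the note above), a heartbeating Activity can observe them. The following is a minimal sketch; `processItems` is a hypothetical Activity:

```typescript
import { heartbeat, sleep } from '@temporalio/activity';
import { CancelledFailure } from '@temporalio/common';

export async function processItems(items: string[]): Promise<void> {
  try {
    for (const item of items) {
      // Stand-in for real work on `item`; `sleep` rejects with CancelledFailure
      // if the Activity is cancelled while waiting.
      await sleep('1s');
      heartbeat();
    }
  } catch (err) {
    if (err instanceof CancelledFailure) {
      // Perform any cleanup here, then rethrow so the Execution is recorded as cancelled.
    }
    throw err;
  }
}
```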
An Activity may optionally checkpoint its progress by providing a `details` argument to the [`heartbeat()`](https://typescript.temporal.io/api/namespaces/activity#heartbeat) function. Should the Activity Execution time out and get retried, the Temporal Server will provide the `details` from the last Heartbeat it received to the next Activity Execution. This can be used to allow the Activity to efficiently resume its work.

```typescript
import { activityInfo, heartbeat, sleep } from '@temporalio/activity';

export async function myActivity(): Promise<void> {
  // Resume work from the latest Heartbeat, if there is one, or start from 1 otherwise
  const startingPoint = activityInfo().heartbeatDetails?.progress ?? 1;
  for (let progress = startingPoint; progress <= 1000; ++progress) {
    // Do something that takes time
    await sleep('1s');
    heartbeat({ progress });
  }
}
```

## Activity Heartbeat Timeout {#activity-heartbeat-timeout}

**How to set a Heartbeat Timeout using the Temporal TypeScript SDK**

A [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout) works in conjunction with [Activity Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat). If the Temporal Server doesn't receive a Heartbeat before the Heartbeat Timeout expires, the Activity is considered timed out and another [Activity Task Execution](/tasks#activity-task-execution) may be scheduled according to the Retry Policy.

To set an Activity's Heartbeat Timeout in TypeScript, set the [`ActivityOptions.heartbeatTimeout`](https://typescript.temporal.io/api/interfaces/common.ActivityOptions#heartbeattimeout) property when creating the corresponding Activity proxy functions using the [`proxyActivities()`](https://typescript.temporal.io/api/namespaces/workflow#proxyactivities) API.

```typescript
const { myLongRunningActivity } = proxyActivities<typeof activities>({
  // ...
  heartbeatTimeout: '30s',
});
```

---

## TypeScript SDK developer guide

![TypeScript SDK Banner](/img/assets/banner-typescript-temporal.png)

:::info TYPESCRIPT SPECIFIC RESOURCES

Build Temporal Applications with the TypeScript SDK.

**Temporal TypeScript Technical Resources:**

- [TypeScript SDK Quickstart - Setup Guide](https://docs.temporal.io/develop/typescript/set-up-your-local-typescript)
- [TypeScript API Documentation](https://typescript.temporal.io)
- [TypeScript SDK Code Samples](https://github.com/temporalio/samples-typescript)
- [TypeScript SDK GitHub](https://github.com/temporalio/sdk-typescript)
- [Temporal 101 in TypeScript Free Course](https://learn.temporal.io/courses/temporal_101/typescript/)

**Get Connected with the Temporal TypeScript Community:**

- [Temporal TypeScript Community Slack](https://temporalio.slack.com/archives/C01DKSMU94L)
- [TypeScript SDK Forum](https://community.temporal.io/tag/typescript-sdk)

:::

## [Core application](/develop/typescript/core-application)

Use the essential components of a Temporal Application (Workflows, Activities, and Workers) to build and run a Temporal application.

- [Develop a Basic Workflow](/develop/typescript/core-application#develop-workflows)
- [Develop a Basic Activity](/develop/typescript/core-application#develop-activities)
- [Start an Activity Execution](/develop/typescript/core-application#activity-execution)
- [Run Worker Processes](/develop/typescript/core-application#run-a-dev-worker)

## [Temporal Client](/develop/typescript/temporal-client)

Connect to a Temporal Service and start a Workflow Execution.

- [Connect to Development Temporal Service](/develop/typescript/temporal-client#connect-to-development-service)
- [Connect to Temporal Cloud](/develop/typescript/temporal-client#connect-to-temporal-cloud)
- [Start a Workflow Execution](/develop/typescript/temporal-client#start-workflow-execution)

## [Testing](/develop/typescript/testing-suite)

Set up the testing suite and test Workflows and Activities.
- [Test Frameworks](/develop/typescript/testing-suite#test-frameworks)
- [Testing Activities](/develop/typescript/testing-suite#test-activities)
- [Testing Workflows](/develop/typescript/testing-suite#test-workflows)
- [How to Replay a Workflow Execution](/develop/typescript/testing-suite#replay)

## [Failure detection](/develop/typescript/failure-detection)

Explore how your application can detect failures using timeouts and automatically attempt to mitigate them with retries.

- [Workflow Timeouts](/develop/typescript/failure-detection#workflow-timeouts)
- [Set Activity Timeouts](/develop/typescript/failure-detection#activity-timeouts)
- [Heartbeat an Activity](/develop/typescript/failure-detection#activity-heartbeats)

## [Workflow message passing](/develop/typescript/message-passing)

Send messages to and read the state of Workflow Executions.

- [Develop with Signals](/develop/typescript/message-passing#signals)
- [Develop with Queries](/develop/typescript/message-passing#queries)
- [What is a Dynamic Handler](/develop/typescript/message-passing#dynamic-handler)

## [Interrupt a Workflow feature guide](/develop/typescript/cancellation)

Interrupt a Workflow Execution with a Cancel or Terminate action.

- [Cancellation scopes in TypeScript](/develop/typescript/cancellation#cancellation-scopes)
- [Reset a Workflow](/develop/typescript/cancellation#reset): Resume a Workflow Execution from an earlier point in its Event History.

## [Asynchronous Activity Completion](/develop/typescript/asynchronous-activity-completion)

Complete Activities asynchronously.

- [Asynchronously Complete an Activity](/develop/typescript/asynchronous-activity-completion)

## [Versioning](/develop/typescript/versioning)

Change Workflow Definitions without causing non-deterministic behavior in running Workflows.

- [Introduction to Versioning](/develop/typescript/versioning)
- [How to Use the Patching API](/develop/typescript/versioning#patching)

## [Observability](/develop/typescript/observability)

Configure and use the Temporal Observability APIs.

- [Emit Metrics](/develop/typescript/observability#metrics)
- [Setup Tracing](/develop/typescript/observability#tracing)
- [Log from a Workflow](/develop/typescript/observability#logging)
- [Use Visibility APIs](/develop/typescript/observability#visibility)

## [Debugging](/develop/typescript/debugging)

Explore various ways to debug your application.

- [Debugging](/develop/typescript/debugging)

## [Schedules](/develop/typescript/schedules)

Run Workflows on a schedule and delay the start of a Workflow.

- [Schedule a Workflow](/develop/typescript/schedules#schedule-a-workflow)
- [Temporal Cron Jobs](/develop/typescript/schedules#temporal-cron-jobs)
- [How to use Start Delay](/develop/typescript/schedules#start-delay)

## [Data encryption](/develop/typescript/converters-and-encryption)

Use compression, encryption, and other data handling by implementing custom converters and codecs.

- [Custom Payload Codec](/develop/typescript/converters-and-encryption#custom-payload-conversion)

## [Temporal Nexus](/develop/typescript/nexus)

The Temporal Nexus feature guide shows how to use Temporal Nexus to connect durable executions within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations.
- [Create a Nexus Endpoint to route requests from caller to handler](/develop/typescript/nexus#create-nexus-endpoint)
- [Define the Nexus Service contract](/develop/typescript/nexus#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](/develop/typescript/nexus#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](/develop/typescript/nexus#develop-caller-workflow-nexus-service)
- [Make Nexus calls across Namespaces with a dev Server](/develop/typescript/nexus#register-the-caller-workflow-in-a-worker-and-start-the-caller-workflow)
- [Make Nexus calls across Namespaces in Temporal Cloud](/develop/typescript/nexus#nexus-calls-across-namespaces-temporal-cloud)

## [Durable Timers](/develop/typescript/timers)

Use Timers to make a Workflow Execution pause or "sleep" for seconds, minutes, days, months, or years.

- [What is a Timer](/develop/typescript/timers)

## [Child Workflows](/develop/typescript/child-workflows)

Explore how to spawn a Child Workflow Execution and handle Child Workflow Events.

- [Start a Child Workflow Execution](/develop/typescript/child-workflows)

## [Continue-As-New](/develop/typescript/continue-as-new)

Continue the Workflow Execution with a new Workflow Execution using the same Workflow ID.

- [Continue-As-New](/develop/typescript/continue-as-new)

## [Enriching the User Interface](/develop/typescript/enriching-ui)

Add descriptive information to workflows and events for better visibility and context in the UI.

- [Adding Summary and Details to Workflows](/develop/typescript/enriching-ui#adding-summary-and-details-to-workflows)

## [Interceptors](/develop/typescript/interceptors)

Manage inbound and outbound SDK calls, enhance tracing, and add authorization to your Workflows and Activities.

- [How to implement interceptors](/develop/typescript/interceptors#interceptors)
- [Register an interceptor](/develop/typescript/interceptors#register-interceptor)

## [Vercel AI SDK Integration](/develop/typescript/integrations/ai-sdk)

Integrate the Vercel AI SDK with Temporal to build durable AI agents and AI-powered applications.

- [Vercel AI SDK Integration](/develop/typescript/integrations/ai-sdk)

---

## AI SDK by Vercel Integration

Temporal's integration with [Vercel's AI SDK](https://ai-sdk.dev/) lets you use the AI SDK's API directly in Workflow code while Temporal handles Durable Execution.

Like all API calls, LLM API calls are non-deterministic. In a [Temporal Application](/glossary#temporal-application), that means you cannot make LLM calls directly from a [Workflow](/glossary#workflow); they must run as [Activities](/glossary#activity). The AI SDK plugin handles this automatically: when you call methods in the AI SDK such as `generateText()`, the plugin wraps those calls in Activities behind the scenes. This preserves the Vercel AI SDK's developer experience that you are already familiar with while Temporal handles Durable Execution for you.

All code snippets in this guide are taken from the TypeScript SDK [ai-sdk samples](https://github.com/temporalio/samples-typescript/tree/main/ai-sdk). Refer to the samples for the complete code and run them locally.

:::info

The Vercel AI SDK Integration is in Public Preview. Refer to the [Temporal product release stages guide](/evaluate/development-production-features/release-stages) for more information.

:::

## Prerequisites

- This guide assumes you are already familiar with the Vercel AI SDK.
  If you aren't, refer to the [Vercel AI SDK documentation](https://ai-sdk.dev/) for more details.
- If you are new to Temporal, we also recommend you read the [Understanding Temporal](/evaluate/understanding-temporal) document or take the [Temporal 101](https://learn.temporal.io/courses/temporal_101/) course to understand the basics of Temporal.
- Ensure you have set up your local development environment by following the [Set up your local with the TypeScript SDK](/develop/typescript/set-up-your-local-typescript) guide. When you are done, leave the Temporal Development Server running if you want to test your code locally.

## Configure Workers to use the AI SDK

Workers are the compute layer of a Temporal Application. They are responsible for executing the code that defines your [Workflows](/glossary#workflow) and [Activities](/glossary#activity). Before you can execute a Workflow or Activity with the Vercel AI SDK, you need to create a Worker and configure it to use the AI SDK plugin.

Follow the steps below to configure your Worker.

1. Install the `@temporalio/ai-sdk` package.

   ```bash
   npm install @temporalio/ai-sdk
   ```

2. Create a `worker.ts` file and configure the Worker to use the AI SDK plugin.

   ```ts
   //... other import statements, initializing a connection
   // to the Temporal Service to be used by the Worker

   const worker = await Worker.create({
     plugins: [
       new AiSDKPlugin({
         modelProvider: openai,
       }),
     ],
     connection,
     namespace: 'default',
     taskQueue: 'ai-sdk',
     workflowsPath: require.resolve('./workflows'),
     activities,
   });

   // ... code that runs the worker
   ```

   The `modelProvider` specifies which AI provider to use when creating models. Choose the provider that best suits your needs. In the Worker options, you are also specifying that the Worker polls the `ai-sdk` Task Queue for work in the `default` Namespace. Make sure that you configure your Client application to use the same Task Queue and Namespace.

3. Run the Worker. This Worker will now poll the Temporal Service for work on the `ai-sdk` Task Queue in the `default` Namespace until you stop it.

   ```bash
   nodemon worker.ts
   ```

You must ensure the Worker process has access to your API credentials. Most provider SDKs read credentials from environment variables. Refer to the [Vercel AI SDK documentation](https://ai-sdk.dev/providers/ai-sdk-providers) for instructions on how to set up your environment variables for the provider you chose.

:::tip

You only need to give provider credentials to the Worker process. The client application, meaning the application that sends requests to the Temporal Service to start Workflow Executions, doesn't need to know about the credentials.

:::

See the full example at [ai-sdk samples](https://github.com/temporalio/samples-typescript/tree/main/ai-sdk).

## Develop a Simple Haiku Agent

To help you get started, you can develop a simple Haiku Agent that generates haikus based on a prompt. If you weren't using Temporal, you would write code like this to generate a haiku:

```ts
async function haikuAgent(prompt: string): Promise<string> {
  const result = await generateText({
    model: openai('gpt-4o-mini'),
    prompt,
    system: 'You only respond in haikus.',
  });
  return result.text;
}
```

To add Durable Execution to your agent, implement the agent as a Temporal Workflow. Use the AI SDK as you normally would, but pass `temporalProvider.languageModel()` as the model. The string you provide (like `'gpt-4o-mini'`) is passed to your configured `modelProvider` to create the model.
```ts
export async function haikuAgent(prompt: string): Promise<string> {
  const result = await generateText({
    model: temporalProvider.languageModel('gpt-4o-mini'),
    prompt,
    system: 'You only respond in haikus.',
  });
  return result.text;
}
```

With only two line changes, you have added Durable Execution to your agent. Your agent now gets automatic retries, timeouts, and the ability to run for extended periods without losing state if the process crashes.

## Provide your durable agent with tools

The Vercel AI SDK lets you provide tools to your agents, and when the model calls them, they execute in the Workflow. Since tool functions run in Workflow context, they must follow Workflow rules. That means they must call Activities or Child Workflows to perform non-deterministic operations like API calls.

For example, if you want to call an external API to get the weather, you would implement it as an Activity and call it from the tool function. The following is an example of an Activity that gets the weather for a given location:

[ai-sdk/src/activities.ts](https://github.com/temporalio/samples-typescript/blob/main/ai-sdk/src/activities.ts)

```ts
export async function getWeather(input: {
  location: string;
}): Promise<{ city: string; temperatureRange: string; conditions: string }> {
  console.log('Activity execution');
  return {
    city: input.location,
    temperatureRange: '14-20C',
    conditions: 'Sunny with wind.',
  };
}
```

Then in your agent implementation, provide the tool to the model using the `tools` option and instruct the model to use the tool when needed.

```ts
const { getWeather } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

export async function toolsAgent(question: string): Promise<string> {
  const result = await generateText({
    model: temporalProvider.languageModel('gpt-4o-mini'),
    prompt: question,
    system: 'You are a helpful agent.',
    tools: {
      getWeather: tool({
        description: 'Get the weather for a given city',
        inputSchema: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: getWeather,
      }),
    },
    stopWhen: stepCountIs(5),
  });
  return result.text;
}
```

## Integrate with Model Context Protocol (MCP) servers

[Model Context Protocol (MCP)](https://modelcontextprotocol.io/) is an open standard that lets AI applications connect to external tools and data sources. Calls to MCP servers, being calls to external APIs, are non-deterministic and would usually need to be implemented as Activities. The Temporal AI SDK integration handles this for you and provides a built-in implementation of a stateless MCP client that you can use inside Workflows.

Follow the steps below to integrate your agent with an MCP server.

1. Create a connection to the MCP servers using the `experimental_createMCPClient` function from the `@ai-sdk/mcp` package. You can register multiple MCP servers by providing multiple factory functions in `mcpClientFactories`.

   ```ts
   // Assumes: experimental_createMCPClient imported under the alias createMCPClient
   const mcpClientFactories = {
     testServer: () =>
       createMCPClient({
         transport: new StdioClientTransport({
           command: 'node',
           args: ['lib/mcp-server.js'],
         }),
       }),
   };
   ```

   The example uses `StdioClientTransport` as the transport mechanism for client-server communication. Each time the Worker processes a Task that requires communication with the MCP server, it will start the server process and connect to it as required by the Task.

2. Configure the Worker to use the MCP client factories.

   ```ts
   const worker = await Worker.create({
     plugins: [
       new AiSDKPlugin({ modelProvider: openai, mcpClientFactories }),
     ],
     // ... other Worker options
   });
   ```
3. In your agent Workflow, use `TemporalMCPClient` to get tools from the MCP server by referencing it by name:

   ```ts
   export async function mcpAgent(prompt: string): Promise<string> {
     const mcpClient = new TemporalMCPClient({ name: 'testServer' });
     const tools = await mcpClient.tools();
     const result = await generateText({
       model: temporalProvider.languageModel('gpt-4o-mini'),
       prompt,
       tools,
       system: 'You are a helpful agent. You always use your tools when needed.',
       stopWhen: stepCountIs(5),
     });
     return result.text;
   }
   ```

Both listing tools and calling them run as Activities behind the scenes, giving you automatic retries, timeouts, and full observability.

---

## AI Integrations

Temporal TypeScript SDK provides integrations with the following tools and services:

- [AI SDK by Vercel](/develop/typescript/integrations/ai-sdk)

---

## Manage Interceptors - TypeScript SDK

Interceptors are a mechanism for modifying inbound and outbound SDK calls. Interceptors are commonly used to add tracing and authorization to the scheduling and execution of Workflows and Activities. You can compare these to "middleware" in other frameworks.

## How to implement interceptors in TypeScript {#interceptors}

The TypeScript SDK comes with an optional interceptor package that adds tracing with [OpenTelemetry](https://www.npmjs.com/package/@temporalio/interceptors-opentelemetry). See how to use it in the [interceptors-opentelemetry](https://github.com/temporalio/samples-typescript/tree/main/interceptors-opentelemetry) code sample.

- [WorkflowInboundCallsInterceptor](https://typescript.temporal.io/api/interfaces/workflow.WorkflowInboundCallsInterceptor/): Intercept Workflow inbound calls like execution, Signals, and Queries.
- [WorkflowOutboundCallsInterceptor](https://typescript.temporal.io/api/interfaces/workflow.WorkflowOutboundCallsInterceptor/): Intercept Workflow outbound calls to Temporal APIs like scheduling Activities and starting Timers.
- [ActivityInboundCallsInterceptor](https://typescript.temporal.io/api/interfaces/worker.ActivityInboundCallsInterceptor): Intercept inbound calls to an Activity (such as `execute`).
- [WorkflowClientInterceptor](https://typescript.temporal.io/api/interfaces/client.WorkflowClientInterceptor/): Intercept workflow-related methods of [`Client`](https://typescript.temporal.io/api/classes/client.Client/) and [`WorkflowHandle`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle) like starting or signaling a Workflow.

Interceptors are run in a chain, and all interceptors work similarly. They accept two arguments: `input` and `next`, where `next` calls the next interceptor in the chain. All interceptor methods are optional; it's up to the implementor to choose which methods to intercept.

## Interceptor examples

**Log start and completion of Activities**

```ts
import {
  ActivityInput,
  Next,
  WorkflowOutboundCallsInterceptor,
} from '@temporalio/workflow';

export class ActivityLogInterceptor implements WorkflowOutboundCallsInterceptor {
  constructor(public readonly workflowType: string) {}

  async scheduleActivity(
    input: ActivityInput,
    next: Next<WorkflowOutboundCallsInterceptor, 'scheduleActivity'>,
  ): Promise<unknown> {
    console.log('Starting activity', { activityType: input.activityType });
    try {
      return await next(input);
    } finally {
      console.log('Completed activity', {
        workflow: this.workflowType,
        activityType: input.activityType,
      });
    }
  }
}
```

**Authorization**

```ts
import {
  defaultDataConverter,
  Next,
  WorkflowInboundCallsInterceptor,
  WorkflowInput,
} from '@temporalio/workflow';

/**
 * WARNING: This demo is meant as a simple auth example.
 * Do not use this for actual authorization logic.
 * Auth headers should be encrypted and credentials
 * stored outside of the codebase.
 */
export class DumbWorkflowAuthInterceptor implements WorkflowInboundCallsInterceptor {
  public async execute(
    input: WorkflowInput,
    next: Next<WorkflowInboundCallsInterceptor, 'execute'>,
  ): Promise<unknown> {
    const authHeader = input.headers.auth;
    const { user, password } = authHeader
      ? await defaultDataConverter.fromPayload(authHeader)
      : { user: undefined, password: undefined };

    if (!(user === 'admin' && password === 'admin')) {
      throw new Error('Unauthorized');
    }
    return await next(input);
  }
}
```

To properly do authorization from Workflow code, the Workflow would need to access encryption keys and possibly authenticate against an external user database, which requires the Workflow to break isolation. Please contact us if you need to discuss this further.

## Register an Interceptor {#register-interceptor}

### Activity and client interceptors registration

- Activity interceptors are registered on Worker creation by passing an array of [ActivityInboundCallsInterceptor factory functions](https://typescript.temporal.io/api/interfaces/worker.ActivityInboundCallsInterceptorFactory) through [WorkerOptions](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions#interceptors).
- Client interceptors are registered on `Client` construction by passing an array of [WorkflowClientInterceptor](https://typescript.temporal.io/api/interfaces/client.WorkflowClientInterceptor) via [ClientOptions.interceptors](https://typescript.temporal.io/api/interfaces/client.ClientOptions#interceptors).

### Workflow interceptors registration

Workflow interceptor registration is different from the other interceptors because Workflow interceptors run in the Workflow isolate. To register Workflow interceptors, export an `interceptors` function from a file located in the `workflows` directory and provide the name of that file to the Worker on creation via [WorkerOptions](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions#interceptors).

At the time of construction, the Workflow context is already initialized for the current Workflow. You may call the [`workflowInfo()`](https://typescript.temporal.io/api/namespaces/workflow#workflowinfo) function to access Workflow-specific information from an interceptor.

`src/workflows/your-interceptors.ts`

```ts
export const interceptors = () => ({
  outbound: [new ActivityLogInterceptor(workflowInfo().workflowType)],
  inbound: [],
});
```

`src/worker/index.ts`

```ts
const worker = await Worker.create({
  workflowsPath: require.resolve('./workflows'),
  interceptors: {
    workflowModules: [require.resolve('./workflows/your-interceptors')],
  },
});
```

---

## Workflow message passing - TypeScript SDK

A Workflow can act like a stateful web service that receives messages: Queries, Signals, and Updates. The Workflow implementation defines these endpoints via handler methods that can react to incoming messages and return values. Temporal Clients use messages to read Workflow state and control its execution. See [Workflow message passing](/encyclopedia/workflow-message-passing) for a general overview of this topic. This page introduces these features for the Temporal TypeScript SDK.

## Write message handlers {#writing-message-handlers}

:::info

The code that follows is part of a working [message-passing sample](https://github.com/temporalio/samples-typescript/tree/main/message-passing/introduction).
:::

Follow these guidelines when writing your message handlers:

- Define a message type as a global variable using [`defineQuery`](https://typescript.temporal.io/api/namespaces/workflow#definequery), [`defineSignal`](https://typescript.temporal.io/api/namespaces/workflow#definesignal), or [`defineUpdate`](https://typescript.temporal.io/api/namespaces/workflow#defineupdate). This is what your Client code will use to send a message to the Workflow.
- Message handlers are defined by calling [`workflow.setHandler`](https://typescript.temporal.io/api/namespaces/workflow#sethandler) in your Workflow function.
- The parameters and return values of handlers and the main Workflow function must be [serializable](/dataconversion).
- Prefer using a single object over multiple input parameters. A single object allows you to add fields without changing the signature.

### Query handlers {#queries}

A [Query](/sending-messages#sending-queries) is a synchronous operation that retrieves state from a Workflow Execution:

```typescript
export enum Language {
  ARABIC = 'ARABIC',
  CHINESE = 'CHINESE',
  ENGLISH = 'ENGLISH',
  FRENCH = 'FRENCH',
  HINDI = 'HINDI',
  PORTUGUESE = 'PORTUGUESE',
  SPANISH = 'SPANISH',
}

interface GetLanguagesInput {
  includeUnsupported: boolean;
}

// 👉 Use the object returned by defineQuery to set the Query handler in
// Workflow code, and when sending the Query in Client code.
export const getLanguages = wf.defineQuery<Language[], [GetLanguagesInput]>('getLanguages');

export async function greetingWorkflow(): Promise<string> {
  const greetings: Partial<Record<Language, string>> = {
    [Language.CHINESE]: '你好,世界',
    [Language.ENGLISH]: 'Hello, world',
  };

  wf.setHandler(getLanguages, (input: GetLanguagesInput): Language[] => {
    // 👉 A Query handler returns a value: it must not mutate the Workflow state
    // and can't perform async operations.
    if (input.includeUnsupported) {
      return Object.values(Language);
    } else {
      return Object.keys(greetings) as Language[];
    }
  });
  ...
}
```

- A Query handler cannot be `async`. You can't perform async operations like executing an Activity in a Query handler.
- `setHandler` can take `QueryHandlerOptions` (such as `description`) as described in the API reference docs for [`workflow.setHandler`](https://typescript.temporal.io/api/namespaces/workflow#sethandler).

### Signal handlers {#signals}

A [Signal](/sending-messages#sending-signals) is an asynchronous message sent to a running Workflow Execution to change its state and control its flow:

```typescript
// 👉 Use the object returned by defineSignal to set the Signal handler in
// Workflow code, and to send the Signal from Client code.
export const approve = wf.defineSignal<[ApproveInput]>('approve');

export async function greetingWorkflow(): Promise<string> {
  let approvedForRelease = false;
  let approverName: string | undefined;

  wf.setHandler(approve, (input) => {
    // 👉 A Signal handler mutates the Workflow state but cannot return a value.
    approvedForRelease = true;
    approverName = input.name;
  });
  ...
}
...
```

- The handler cannot return a value. The response is sent immediately from the server, without waiting for the Workflow to process the Signal.
- Signal and Update handlers can be `async`. This allows you to use Activities, Child Workflows, durable [`workflow.sleep`](https://typescript.temporal.io/api/namespaces/workflow#sleep) Timers, [`workflow.condition`](https://typescript.temporal.io/api/namespaces/workflow#condition) conditions, and more.
  See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for guidelines on safely using async Signal and Update handlers.
- If your Workflow needs to do some async initialization before handling a Signal or Update, use [`workflow.condition`](https://typescript.temporal.io/api/namespaces/workflow#condition) inside your handler to wait until initialization has completed.
- `setHandler` can take `SignalHandlerOptions` (such as `description` and `unfinishedPolicy`) as described in the API reference docs for [`workflow.setHandler`](https://typescript.temporal.io/api/namespaces/workflow#sethandler).

### Update handlers and validators {#updates}

An [Update](/sending-messages#sending-updates) is a trackable synchronous request sent to a running Workflow Execution. It can change the Workflow state, control its flow, and return a result. The sender must wait until the Worker accepts or rejects the Update. The sender may wait further to receive a returned value or an exception if something goes wrong:

```typescript
// 👉 Use the object returned by defineUpdate to set the Update handler in
// Workflow code, and to send Updates from Client code.
export const setLanguage = wf.defineUpdate<Language, [Language]>('setLanguage');

export async function greetingWorkflow(): Promise<string> {
  const greetings: Partial<Record<Language, string>> = {
    [Language.CHINESE]: '你好,世界',
    [Language.ENGLISH]: 'Hello, world',
  };
  let language = Language.ENGLISH;

  wf.setHandler(
    setLanguage,
    (newLanguage: Language) => {
      // 👉 An Update handler can mutate the Workflow state and return a value.
      const previousLanguage = language;
      language = newLanguage;
      return previousLanguage;
    },
    {
      validator: (newLanguage: Language) => {
        // 👉 Update validators are optional
        if (!(newLanguage in greetings)) {
          throw new Error(`${newLanguage} is not supported`);
        }
      },
    }
  );
  ...
}
```

- `setHandler` can take `UpdateHandlerOptions` (such as `validator`, `description`, and `unfinishedPolicy`) as described in the API reference docs for [`workflow.setHandler`](https://typescript.temporal.io/api/namespaces/workflow#sethandler).
- About validators:
  - Use validators to reject an Update before it is written to History. Validators are always optional. If you don't need to reject Updates, you don't need a validator.
  - To set a validator, pass the validator function in `UpdateHandlerOptions` when calling [`workflow.setHandler`](https://typescript.temporal.io/api/namespaces/workflow#sethandler). The validator must be a non-async function that accepts the same argument types as the handler and returns `void`.
- Accepting and rejecting Updates with validators:
  - To reject an Update, throw an error of any type in the validator.
  - Without a validator, Updates are always accepted.
- Validators and Event History:
  - The `WorkflowExecutionUpdateAccepted` event is written into History whether the acceptance was automatic or due to a validator function not throwing an error.
  - When a validator throws an error, the Update is rejected, `WorkflowExecutionUpdateAccepted` _won't_ be added to the Event History, and the caller receives an "Update failed" error.
- Use [`workflow.currentUpdateInfo`](https://typescript.temporal.io/api/namespaces/workflow#current_update_info) to obtain information about the current Update. This includes the Update ID, which can be useful for deduplication when using Continue-As-New: see [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing) and the sketch after this list.
- Update and Signal handlers can be `async`, letting them use Activities, Child Workflows, durable [`workflow.sleep`](https://typescript.temporal.io/api/namespaces/workflow#sleep) Timers, [`workflow.condition`](https://typescript.temporal.io/api/namespaces/workflow#condition) conditions, and more. See [Async handlers](#async-handlers) and [Workflow message passing](/encyclopedia/workflow-message-passing) for safe usage guidelines.
- If your Workflow needs to do some async initialization before handling an Update or Signal, use [`workflow.condition`](https://typescript.temporal.io/api/namespaces/workflow#condition) inside your handler to wait until initialization has completed.
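As noted above, `workflow.currentUpdateInfo` exposes the current Update's ID. The following is a minimal sketch; `myUpdate` and its input type are stand-ins:

```typescript
wf.setHandler(myUpdate, async (input: MyUpdateInput): Promise<void> => {
  // currentUpdateInfo() returns information about the Update being handled,
  // or undefined when called outside of an Update handler.
  const updateId = wf.currentUpdateInfo()?.id;
  // ... use updateId, for example to deduplicate work across Continue-As-New
});
```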
## Send messages {#send-messages}

To send Queries, Signals, or Updates, you call methods on a [WorkflowHandle](https://typescript.temporal.io/api/namespaces/client#workflowhandle) object. To obtain one:

- Use [`client.workflow.start`](https://typescript.temporal.io/api/classes/client.WorkflowClient#start), which returns a handle for the Workflow Execution it starts.
- Use [`client.workflow.getHandle`](https://typescript.temporal.io/api/classes/client.WorkflowClient#gethandle) to retrieve a Workflow handle by its Workflow Id.

For example:

```typescript
const handle = await client.workflow.start(greetingWorkflow, {
  taskQueue: 'my-task-queue',
  args: [myArg],
  workflowId: 'my-workflow-id',
});
```

To check the argument types required when sending messages, and the return type for Queries and Updates, refer to the corresponding handler method in the Workflow Definition.

:::warning Using Continue-as-New and Updates

- Temporal _does not_ support Continue-as-New functionality within Update handlers.
- Complete all handlers _before_ using Continue-as-New.
- Use Continue-as-New from your main Workflow Definition method, just as you would complete or fail a Workflow Execution.

:::

### Send a Query {#send-query}

Use [`WorkflowHandle.query`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#query) to send a Query to a Workflow Execution:

```typescript
const supportedLanguages = await handle.query(getLanguages, {
  includeUnsupported: false,
});
```

- Sending a Query doesn't add events to a Workflow's Event History.
- You can send Queries to closed Workflow Executions within a Namespace's Workflow retention period. This includes Workflows that have completed, failed, or timed out. Querying terminated Workflows is not safe and, therefore, not supported.
- A Worker must be online and polling the Task Queue to process a Query.

### Send a Signal {#send-signal}

You can send a Signal to a Workflow Execution from a Temporal Client or from another Workflow Execution. However, you can only send Signals to Workflow Executions that haven't closed.

#### Send a Signal from a Client {#send-signal-from-client}

Use [WorkflowHandle.signal](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle#signal) to send a Signal:

```typescript
await handle.signal(greetingWorkflow.approve, { name: 'me' });
```

- The call returns when the server accepts the Signal; it does _not_ wait for the Signal to be delivered to the Workflow Execution.
- The [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the Workflow's Event History.

### Send a Signal from a Workflow {#send-signal-from-workflow}

A Workflow can send a Signal to another Workflow, in which case it's called an _External Signal_.
Use [`getExternalWorkflowHandle`](https://typescript.temporal.io/api/namespaces/workflow#getExternalWorkflowHandle):

```typescript
export async function yourWorkflowThatSignals() {
  const handle = getExternalWorkflowHandle('workflow-id-123');
  await handle.signal(joinSignal, { userId: 'user-1', groupId: 'group-1' });
}
```

When an External Signal is sent:

- A [SignalExternalWorkflowExecutionInitiated](/references/events#signalexternalworkflowexecutioninitiated) Event appears in the sender's Event History.
- A [WorkflowExecutionSignaled](/references/events#workflowexecutionsignaled) Event appears in the recipient's Event History.

The `getExternalWorkflowHandle` method helps ensure that Workflows remain deterministic. Recall that one aspect of Workflow determinism is that Workflows must not make network calls directly, which means developers cannot use a Temporal Client within Workflow code to send Signals or start other Workflows. Instead, to communicate between Workflows, we use `getExternalWorkflowHandle`, which both keeps Workflows deterministic and records these interactions as Events in the Workflow's Event History.

### Signal-With-Start {#signal-with-start}

Signal-With-Start allows a Client to send a Signal to a Workflow Execution, starting the Execution if it is not already running. Use [`Client.workflow.signalWithStart`](https://typescript.temporal.io/api/classes/client.WorkflowClient#signalwithstart):

```typescript
const client = new Client();

await client.workflow.signalWithStart(yourWorkflow, {
  workflowId: 'workflow-id-123',
  taskQueue: 'my-taskqueue',
  args: [{ foo: 1 }],
  signal: joinSignal,
  signalArgs: [{ userId: 'user-1', groupId: 'group-1' }],
});
```

Signal-With-Start is limited to Client use. It cannot be called from a Workflow.

### Send an Update {#send-update-from-client}

An Update is a synchronous, blocking call that can change Workflow state, control its flow, and return a result. A Client sending an Update must wait until the Server delivers the Update to a Worker, so Workers must be available and responsive. If you need a response as soon as the Server receives the request, use a Signal instead. Also note that you can't send Updates to other Workflow Executions from within a Workflow; for the Update counterpart of Signal-With-Start, see [Update-With-Start](#update-with-start) below.

- `WorkflowExecutionUpdateAccepted` is added to the Event History when the Worker confirms that the Update passed validation.
- `WorkflowExecutionUpdateCompleted` is added to the Event History when the Worker confirms that the Update has finished.

To send an Update to a Workflow Execution, you can:

- Call [`WorkflowHandle.executeUpdate`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#executeUpdate) and wait for the Update to complete. This code fetches an Update result:

  ```typescript
  let previousLanguage = await handle.executeUpdate(setLanguage, {
    args: [Language.CHINESE],
  });
  ```

- Call [`WorkflowHandle.startUpdate`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#startUpdate) to receive a [`WorkflowUpdateHandle`](https://typescript.temporal.io/api/interfaces/client.WorkflowUpdateHandle) as soon as the Update is accepted or rejected.
  - Use this Update handle later to fetch your results.
  - `async` Update handlers normally perform long-running asynchronous operations, such as calling an Activity.
  - `startUpdate` only waits until the Worker has accepted or rejected the Update, not until all asynchronous operations are complete.
For example:

```typescript
const updateHandle = await handle.startUpdate(setLanguage, {
  args: [Language.ENGLISH],
  waitForStage: WorkflowUpdateStage.ACCEPTED,
});
previousLanguage = await updateHandle.result();
```

For more details, see the [Async handlers](#async-handlers) section.

To obtain an Update handle, you can:

- Use [`WorkflowHandle.startUpdate`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#startUpdate) to start an Update and return the handle, as shown in the preceding example.
- Use [`getUpdateHandle`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#getupdatehandle) to fetch a handle for an in-progress Update using the Update ID.

#### Update-With-Start {#update-with-start}

:::tip

For open source server users, [Temporal Server version 1.28](https://github.com/temporalio/temporal/releases/tag/v1.28.0) or later is recommended.

:::

[Update-with-Start](/sending-messages#update-with-start) lets you [send an Update](/develop/typescript/message-passing#send-update-from-client) that checks whether an already-running Workflow with that ID exists:

- If the Workflow exists, the Update is processed.
- If the Workflow does not exist, a new Workflow Execution is started with the given ID, and the Update is processed before the main Workflow method starts to execute.

Use [`executeUpdateWithStart`](https://typescript.temporal.io/api/classes/client.WorkflowClient#executeUpdateWithStart) to start an Update and wait for the result in one go. Alternatively, use [`startUpdateWithStart`](https://typescript.temporal.io/api/classes/client.WorkflowClient#startUpdateWithStart) to start an Update and receive a [`WorkflowUpdateHandle`](https://typescript.temporal.io/api/interfaces/client.WorkflowUpdateHandle), and then use `await updateHandle.result()` to retrieve the result from the Update. These calls return once the requested Update wait stage has been reached, or when the request times out.

You will need to provide a [`WithStartWorkflowOperation`](https://typescript.temporal.io/api/classes/client.WithStartWorkflowOperation) to define the Workflow that will be started if necessary, and its arguments. You must specify a [WorkflowIdConflictPolicy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) when creating the `WithStartWorkflowOperation`. Note that a `WithStartWorkflowOperation` can only be used once.

Here's an example taken from the [early-return](https://github.com/temporalio/samples-typescript/tree/main/early-return) sample:

```typescript
const startWorkflowOperation = new WithStartWorkflowOperation(transactionWorkflow, {
  workflowId,
  args: [transactionID],
  taskQueue: 'early-return',
  workflowIdConflictPolicy: 'FAIL',
});

const earlyConfirmation = await client.workflow.executeUpdateWithStart(getTransactionConfirmation, {
  startWorkflowOperation,
});

const wfHandle = await startWorkflowOperation.workflowHandle();
const finalReport = await wfHandle.result();
```

:::info SEND MESSAGES WITHOUT TYPE SAFETY

In real-world development, sometimes you may be unable to import message type objects defined by `defineQuery`, `defineSignal`, or `defineUpdate`. When you don't have access to the Workflow Definition, or it isn't written in TypeScript, you can still send messages using dynamic method invocation with APIs that aren't type-safe.

Pass message type names instead of message type objects to:

- [`client.workflow.start`](https://typescript.temporal.io/api/classes/client.WorkflowClient#start)
- [`WorkflowHandle.query`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#query)
- [`WorkflowHandle.signal`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle#signal)
- [`WorkflowHandle.executeUpdate`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#executeUpdate)
- [`WorkflowHandle.startUpdate`](https://typescript.temporal.io/api/interfaces/client.WorkflowHandle/#startUpdate)

Pass Workflow Ids to these APIs to get Workflow handles:

- [`client.workflow.getHandle`](https://typescript.temporal.io/api/classes/client.WorkflowClient#gethandle)
- [`getExternalWorkflowHandle`](https://typescript.temporal.io/api/namespaces/workflow#getExternalWorkflowHandle)
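For example, reusing the message names from the greeting Workflow above, a minimal non-type-safe interaction might look like this:

```typescript
// Reference messages by their string names when the typed definitions can't be imported.
const handle = client.workflow.getHandle('my-workflow-id');

await handle.signal('approve', { name: 'me' });
const languages = await handle.query('getLanguages', { includeUnsupported: false });
const previousLanguage = await handle.executeUpdate('setLanguage', { args: ['CHINESE'] });
```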
:::

## Message handler patterns {#message-handler-patterns}

This section covers common write operations, such as Signal and Update handlers. It doesn't apply to pure read operations, like Queries or Update Validators.

:::tip

For additional information, see [Inject work into the main Workflow](/handling-messages#injecting-work-into-main-workflow), [Ensuring your messages are processed exactly once](/handling-messages#exactly-once-message-processing), and [this sample](https://github.com/temporalio/samples-typescript/blob/main/message-passing/safe-message-handlers/README.md) demonstrating safe `async` message handling.

:::

### Use async handlers {#async-handlers}

Signal and Update handlers can be `async` functions. Using `async` allows you to use `await` with Activities, Child Workflows, durable [`workflow.sleep`](https://typescript.temporal.io/api/namespaces/workflow#sleep) Timers, [`workflow.condition`](https://typescript.temporal.io/api/namespaces/workflow#condition) conditions, etc. This expands the possibilities for what can be done by a handler, but it also means that handler executions and your main Workflow method all run concurrently, with switching occurring between them at `await` calls. It's essential to understand the things that could go wrong in order to use `async` handlers safely.

See [Workflow message passing](/encyclopedia/workflow-message-passing) for guidance on safe usage of async Signal and Update handlers, the [Safe message handlers](https://github.com/temporalio/samples-typescript/blob/main/message-passing/safe-message-handlers/README.md) sample, and the [Controlling handler concurrency](#control-handler-concurrency) and [Waiting for message handlers to finish](#wait-for-message-handlers) sections below.

The following code executes an Activity that makes a network call to a remote service. It modifies the Update handler from earlier on this page, turning it into an `async` function:

```typescript
// 👉 Use the object returned by defineUpdate to set the Update handler in
// Workflow code, and to send Updates from Client code.
export const setLanguageUsingActivity = wf.defineUpdate<Language, [Language]>('setLanguageUsingActivity');

export async function greetingWorkflow(): Promise<string> {
  const greetings: Partial<Record<Language, string>> = {
    [Language.CHINESE]: '你好,世界',
    [Language.ENGLISH]: 'Hello, world',
  };
  let language = Language.ENGLISH;
  // A lock, e.g. the Mutex class from the 'async-mutex' package
  const lock = new Mutex();

  wf.setHandler(setLanguageUsingActivity, async (newLanguage) => {
    // 👉 An Update handler can mutate the Workflow state and return a value.
    // 👉 Since this Update handler is async, it can execute an Activity.
    if (!(newLanguage in greetings)) {
      // 👉 Do the following with the lock held to ensure that multiple calls
      // to setLanguageUsingActivity are processed in order.
      await lock.runExclusive(async () => {
        if (!(newLanguage in greetings)) {
          const greeting = await callGreetingService(newLanguage);
          if (!greeting) {
            // 👉 An Update validator cannot be async, so it cannot be used to check
            // that the remote callGreetingService supports the requested language.
            // Throwing ApplicationFailure will fail the Update, but the
            // WorkflowExecutionUpdateAccepted event will still be added to history.
            throw wf.ApplicationFailure.create({
              message: `${newLanguage} is not supported by the greeting service`,
            });
          }
          greetings[newLanguage] = greeting;
        }
      });
    }
    const previousLanguage = language;
    language = newLanguage;
    return previousLanguage;
  });
  ...
}
```

After updating the code to use `async`, your Update handler can schedule an Activity and await the result. Although an `async` Signal handler can also execute an Activity, using an Update handler allows the Client to receive a result or error once the Activity completes. This lets your Client track the progress of asynchronous work performed by the Update's Activities, Child Workflows, etc.

### Add wait conditions to block

Sometimes, `async` Signal or Update handlers need to meet certain conditions before they should continue. You can use [`workflow.condition`](https://typescript.temporal.io/api/namespaces/workflow#condition) to prevent the code from proceeding until a condition is true. You specify the condition by passing a function that returns `true` or `false`. This is an important feature that helps you control your handler logic.

Here are three important use cases for `workflow.condition`:

- Waiting for a Signal or Update to arrive.
- Waiting in a handler until it is appropriate to continue.
- Waiting in the main Workflow until all active handlers have finished.

#### Wait for a Signal or Update to arrive

It's common to use `workflow.condition` to wait for a particular Signal or Update to be sent by a Client:

```typescript
export async function greetingWorkflow(): Promise<string> {
  let approvedForRelease = false;
  let approverName: string | undefined;

  wf.setHandler(approve, (input) => {
    approvedForRelease = true;
    approverName = input.name;
  });
  ...
  await wf.condition(() => approvedForRelease);
  ...
}
```

#### Use wait conditions in handlers

It's common to use a Workflow wait condition in a handler. For example, suppose your Workflow has a mutable variable `readyForUpdateToExecute` that indicates whether your Update handler should be allowed to start executing. You can use `workflow.condition` in the handler to make the handler pause until the condition is met:

```typescript
let readyForUpdateToExecute = false;

wf.setHandler(myUpdate, async (input: MyUpdateInput): Promise<MyUpdateOutput> => {
  await wf.condition(() => readyForUpdateToExecute);
  ...
});
```

Remember: handlers can execute before the main Workflow method starts.

You can also use wait conditions anywhere else in the handler to wait for a specific condition to become true. This allows you to write handlers that pause at multiple points, each time waiting for a required condition to become true.

#### Ensure your handlers finish before the Workflow completes {#wait-for-message-handlers}

Workflow wait conditions can ensure your handler completes before a Workflow finishes. When your Workflow uses `async` Signal or Update handlers, your main Workflow method can return or Continue-as-New while a handler is still waiting on an async task, such as an Activity result.
If the Workflow completes first, it may interrupt the handler before it finishes crucial work and cause Client errors when trying to retrieve Update results. Use [`workflow.condition`](https://typescript.temporal.io/api/namespaces/workflow#condition) and [`allHandlersFinished`](https://typescript.temporal.io/api/namespaces/workflow#allhandlersfinished) to address this problem and allow your Workflow to end smoothly:

```typescript
export async function myWorkflow(): Promise<MyWorkflowOutput> {
  await wf.condition(wf.allHandlersFinished);
  return workflowOutput;
}
```

By default, your Worker will log a warning when you allow a Workflow Execution to finish with unfinished handler executions. You can silence these warnings on a per-handler basis by setting the `unfinishedPolicy` in `SignalHandlerOptions` or `UpdateHandlerOptions` when calling [`workflow.setHandler`](https://typescript.temporal.io/api/namespaces/workflow#sethandler).

See [Finishing handlers before the Workflow completes](/handling-messages#finishing-message-handlers) for more information.

### Use a lock to prevent concurrent handler execution {#control-handler-concurrency}

Concurrent processes can interact in unpredictable ways. Incorrectly written [concurrent message-passing](/handling-messages#message-handler-concurrency) code may not work correctly when multiple handler instances run simultaneously. Here's an example of a pathological case:

```typescript
export async function myWorkflow(): Promise<void> {
  let x = 0;
  let y = 0;

  wf.setHandler(mySignal, async () => {
    const data = await myActivity();
    x = data.x;
    // 🐛🐛 Bug!! If multiple instances of this handler are executing
    // concurrently, then there may be times when the Workflow has x from one
    // Activity execution and y from another.
    await wf.sleep(500); // or await anything else
    y = data.y;
  });
  ...
}
```

Coordinating access using a lock (also known as a mutex) corrects this code. Locking makes sure that only one handler instance can execute a specific section of code at any given time:

```typescript
...
export async function myWorkflow(): Promise<{ name: string }> {
  let x = 0;
  let y = 0;
  // A lock, e.g. the Mutex class from the 'async-mutex' package
  const lock = new Mutex();

  wf.setHandler(mySignal, async () => {
    await lock.runExclusive(async () => {
      const data = await myActivity();
      x = data.x;
      // ✅ OK: node's event loop may switch now to a different handler
      // execution, or to the main workflow function, but no other execution of
      // this handler can run until this execution finishes.
      await wf.sleep(500); // or await anything else
      y = data.y;
    });
  });

  return {
    name: 'hello',
  };
}
```

## Message handler troubleshooting {#message-handler-troubleshooting}

When sending a Signal, Update, or Query to a Workflow, your Client might encounter the following errors:

- **The Client can't contact the server**: You'll receive a [`client.ServiceError`](https://typescript.temporal.io/api/classes/client.ServiceError) on which the `cause.code` attribute is [gRPC status code](https://grpc.io/docs/guides/status-codes/) 14 `UNAVAILABLE` (after some retries).
- **The Workflow does not exist**: You'll receive a [`common.WorkflowNotFoundError`](https://typescript.temporal.io/api/classes/common.WorkflowNotFoundError) error.

### Problems when sending a Signal {#signal-problems}

When sending a Signal, the two errors described above are the only errors that will result from your request. For Queries and Updates, the Client waits for a response from the Worker, so additional errors may occur during handler execution by the Worker.
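For example, a Client sending a Signal could handle both errors as follows. This is a minimal sketch; the Workflow Id and Signal name are assumed from the examples above:

```typescript
import { ServiceError } from '@temporalio/client';
import { WorkflowNotFoundError } from '@temporalio/common';

try {
  await client.workflow.getHandle('my-workflow-id').signal('approve', { name: 'me' });
} catch (err) {
  if (err instanceof WorkflowNotFoundError) {
    // The Workflow Execution doesn't exist
    console.warn('No Workflow Execution with that Workflow Id');
  } else if (err instanceof ServiceError) {
    // Inspect the underlying gRPC status code, e.g. 14 UNAVAILABLE
    console.error('Request failed:', (err.cause as { code?: number } | undefined)?.code);
  } else {
    throw err;
  }
}
```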
### Problems when sending an Update {#update-problems}

When working with Updates, you may encounter these problems:

- **No Workflow Workers are polling the Task Queue**: Your request will be retried by the SDK Client indefinitely.
- **Update failed**: You'll receive a [`client.WorkflowUpdateFailedError`](https://typescript.temporal.io/api/classes/client.WorkflowUpdateFailedError) exception. There are two ways this can happen:
  - The Update was rejected by an Update validator defined in the Workflow alongside the Update handler.
  - The Update failed after having been accepted. Update failures are like [Workflow failures](/references/failures#errors-in-workflows). Issues that cause a Workflow failure in the main method also cause Update failures in the Update handler. These might include:
    - A failed Child Workflow
    - A failed Activity (if the Activity retries have been set to a finite number)
    - The Workflow author raising `ApplicationFailure`
- **The handler caused the Workflow Task to fail**: A [Workflow Task Failure](/references/failures#errors-in-workflows) causes the server to retry Workflow Tasks indefinitely. What happens to your Update request depends on its stage:
  - If the request hasn't been accepted by the server, you receive a [`client.ServiceError`](https://typescript.temporal.io/api/classes/client.ServiceError) on which the `cause.code` attribute is [gRPC status code](https://grpc.io/docs/guides/status-codes/) 9 `FAILED_PRECONDITION` (after some retries).
  - If the request has been accepted, it is durable. Once the Workflow is healthy again after a code deploy, use a [`WorkflowUpdateHandle`](https://typescript.temporal.io/api/interfaces/client.WorkflowUpdateHandle) to fetch the Update result.
- **The Workflow finished while the Update handler execution was in progress**: You'll receive a [`client.ServiceError`](https://typescript.temporal.io/api/classes/client.ServiceError) on which the `cause.code` attribute is [gRPC status code](https://grpc.io/docs/guides/status-codes/) 5 `NOT_FOUND`. This happens if the Workflow finished while the Update handler execution was in progress, for example, because:
  - The Workflow was canceled or failed.
  - The Workflow completed normally or continued-as-new and the Workflow author did not [wait for handlers to be finished](/handling-messages#finishing-message-handlers).

### Problems when sending a Query {#query-problems}

When working with Queries, you may encounter these errors:

- **There is no Workflow Worker polling the Task Queue**: You'll receive a [`client.ServiceError`](https://typescript.temporal.io/api/classes/client.ServiceError) on which the `cause.code` attribute is [gRPC status code](https://grpc.io/docs/guides/status-codes/) 9 `FAILED_PRECONDITION`.
- **Query failed**: You'll receive a [`client.QueryNotRegisteredError`](https://typescript.temporal.io/api/classes/client.QueryNotRegisteredError) exception if something goes wrong during a Query. Any error in a Query handler will trigger this error. This differs from Signal and Update requests, where errors can lead to Workflow Task Failure instead.
- **The handler caused the Workflow Task to fail**: This would happen, for example, if the Query handler blocks the thread for too long without yielding.

## Define Signals and Queries statically or dynamically {#dynamic-handler}

- Handlers for both Signals and Queries can take arguments, which can be used inside `setHandler` logic.
- Only Signal Handlers can mutate state, and only Query Handlers can return values.
* [Define Signals and Queries statically](#static-signals-and-queries)
* [Define Signals and Queries dynamically](#dynamic-signals-and-queries)

### Define Signals and Queries statically {#static-signals-and-queries}

If you know the names of your Signals and Queries upfront, we recommend declaring them outside the Workflow Definition.

[signals-queries/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/signals-queries/src/workflows.ts)

```ts
export const unblockSignal = wf.defineSignal('unblock');
export const isBlockedQuery = wf.defineQuery<boolean>('isBlocked');

export async function unblockOrCancel(): Promise<void> {
  let isBlocked = true;
  wf.setHandler(unblockSignal, () => void (isBlocked = false));
  wf.setHandler(isBlockedQuery, () => isBlocked);
  wf.log.info('Blocked');
  try {
    await wf.condition(() => !isBlocked);
    wf.log.info('Unblocked');
  } catch (err) {
    if (err instanceof wf.CancelledFailure) {
      wf.log.info('Cancelled');
    }
    throw err;
  }
}
```

This technique helps provide type safety because you can export the type signature of the Signal or Query to be called by the Client.

### Define Signals and Queries dynamically {#dynamic-signals-and-queries}

For more flexible use cases, you might want a dynamic Signal (such as a generated ID). You can handle it in two ways:

- Avoid making it dynamic by collapsing all Signals into one handler and moving the ID to the payload.
- Actually make the Signal name dynamic by inlining the Signal definition per handler.

```ts
// "fat handler" solution: note that setHandler takes a Signal definition, not a bare string
const genericSignal = wf.defineSignal<[{ taskId: string }]>('genericSignal');
wf.setHandler(genericSignal, (payload) => {
  switch (payload.taskId) {
    case taskAId:
      // do task A things
      break;
    case taskBId:
      // do task B things
      break;
    default:
      throw new Error('Unexpected task.');
  }
});

// "inline definition" solution
wf.setHandler(wf.defineSignal<[unknown]>(`task-${taskAId}`), (payload) => {
  /* do task A things */
});
wf.setHandler(wf.defineSignal<[unknown]>(`task-${taskBId}`), (payload) => {
  /* do task B things */
});

// utility "inline definition" helper
const inlineSignal = (signalName: string, handler: (payload: unknown) => void) =>
  wf.setHandler(wf.defineSignal<[unknown]>(signalName), handler);
inlineSignal(`task-${taskBId}`, (payload) => {
  /* do task B things */
});
```
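For example, here is a minimal sketch of a Client using the exported definitions from the static example above; the `Client` wiring, the Workflow ID, and the `unblockAndCheck` helper are assumptions for illustration:

```ts
import { Client } from '@temporalio/client';
import { isBlockedQuery, unblockSignal } from './workflows';

async function unblockAndCheck(client: Client, workflowId: string): Promise<void> {
  const handle = client.workflow.getHandle(workflowId);
  // The imported definitions carry the Signal/Query names and type signatures.
  await handle.signal(unblockSignal);
  const stillBlocked = await handle.query(isBlockedQuery); // inferred as boolean
  console.log({ stillBlocked });
}
```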
**API Design FAQs**

**Why not "new Signal" and "new Query"?**

The semantic of `defineSignal` and `defineQuery` is intentional. They return Signal and Query **definitions**, not unique instances of Signals and Queries themselves. The following is their [entire source code](https://github.com/temporalio/sdk-typescript/blob/fc658d3760e6653aec47732ab17a0062b7dd23fc/packages/workflow/src/workflow.ts#L883-L907):

```ts
/**
 * Define a signal method for a Workflow.
 */
export function defineSignal<Args extends any[] = []>(
  name: string,
): SignalDefinition<Args> {
  return {
    type: 'signal',
    name,
  };
}

/**
 * Define a query method for a Workflow.
 */
export function defineQuery<Ret, Args extends any[] = []>(
  name: string,
): QueryDefinition<Ret, Args> {
  return {
    type: 'query',
    name,
  };
}
```

Signals and Queries are instantiated only in `setHandler` and are specific to particular Workflow Executions. These distinctions might seem minor, but they model how Temporal works under the hood, because Signals and Queries are messages identified by "just strings" and don't have meaning independent of the Workflow having a listener to handle them. This will be clearer if you refer to the Client-side APIs.

**Why setHandler and not OTHER_API?**

We named it `setHandler` instead of `subscribe` because a Signal or Query can have only one "handler" at a time, whereas `subscribe` could imply an Observable with multiple consumers and is a higher-level construct.

```ts
wf.setHandler(MySignal, handlerFn1);
wf.setHandler(MySignal, handlerFn2); // replaces handlerFn1
```

If you are familiar with [RxJS](https://rxjs.dev/), you are free to wrap your Signals and Queries into Observables if you want, or you could dynamically reassign the listener based on your business logic or Workflow state.
--- ## Manage Namespaces - TypeScript SDK ## How to create and manage Namespaces {#namespaces} You can create, update, deprecate or delete your [Namespaces](/namespaces) using either the Temporal CLI or SDK APIs. Use Namespaces to isolate your Workflow Executions according to your needs. For example, you can use Namespaces to match the development lifecycle by having separate `dev` and `prod` Namespaces. You could also use them to ensure Workflow Executions between different teams never communicate - such as ensuring that the `teamA` Namespace never impacts the `teamB` Namespace. On Temporal Cloud, use the [Temporal Cloud UI](/cloud/namespaces#create-a-namespace) to create and manage a Namespace from the UI, or [tcld commands](https://docs.temporal.io/cloud/tcld/namespace/) to manage Namespaces from the command-line interface. On self-hosted Temporal Service, you can register and manage your Namespaces using the Temporal CLI (recommended) or programmatically using APIs. Note that these APIs and Temporal CLI commands will not work with Temporal Cloud. Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service in the Temporal Service to set restrictions on who can create, update, or deprecate Namespaces. You must register a Namespace with the Temporal Service before setting it in the Temporal Client. ### How to register Namespaces {#register-namespace} Registering a Namespace creates a Namespace on the Temporal Service or Temporal Cloud. On Temporal Cloud, use the [Temporal Cloud UI](/cloud/namespaces#create-a-namespace) or [tcld commands](https://docs.temporal.io/cloud/tcld/namespace/) to create Namespaces. On self-hosted Temporal Service, you can register your Namespaces using the Temporal CLI (recommended) or programmatically using APIs. Note that these APIs and Temporal CLI commands will not work with Temporal Cloud. Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service in the Temporal Service to set restrictions on who can create, update, or deprecate Namespaces. ### How to manage Namespaces {#manage-namespaces} You can get details for your Namespaces, update Namespace configuration, and deprecate or delete your Namespaces. On Temporal Cloud, use the [Temporal Cloud UI](/cloud/namespaces#create-a-namespace) or [tcld commands](https://docs.temporal.io/cloud/tcld/namespace/) to manage Namespaces. On self-hosted Temporal Service, you can manage your registered Namespaces using the Temporal CLI (recommended) or programmatically using APIs. Note that these APIs and Temporal CLI commands will not work with Temporal Cloud. Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service in the Temporal Service to set restrictions on who can create, update, or deprecate Namespaces. You must register a Namespace with the Temporal Service before setting it in the Temporal Client. --- ## Observability - TypeScript SDK The observability section of the TypeScript developer guide covers the many ways to view the current state of your [Temporal Application](/temporal#temporal-application)—that is, ways to view which [Workflow Executions](/workflow-execution) are tracked by the [Temporal Platform](/temporal#temporal-platform) and the state of any specified Workflow Execution, either currently or at points of an execution. 
This section covers features related to viewing the state of the application, including:

- [Emit metrics](#metrics)
- [Set up tracing](#tracing)
- [Log from a Workflow](#logging)
- [Visibility APIs](#visibility)

## Emit metrics {#metrics}

Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the [SDK metrics reference](/references/sdk-metrics).

- For an overview of Prometheus and Grafana integration, refer to the [Monitoring](/self-hosted-guide/monitoring) guide.
- For an end-to-end example that exposes metrics with the TypeScript SDK, refer to the [samples-typescript](https://github.com/temporalio/samples-typescript/tree/main/interceptors-opentelemetry) repo.

Workers can emit metrics and traces. There are a few [telemetry options](https://typescript.temporal.io/api/interfaces/worker.TelemetryOptions) that can be provided to [`Runtime.install`](https://typescript.temporal.io/api/classes/worker.Runtime/#install). The common options are:

- `metrics: { otel: { url } }`: The URL of a gRPC [OpenTelemetry collector](https://opentelemetry.io/docs/collector/).
- `metrics: { prometheus: { bindAddress } }`: Address on the Worker host that will have metrics for [Prometheus](https://prometheus.io/) to scrape.

To set up tracing of Workflows and Activities, use our `opentelemetry-interceptors` package. (For details, see the next section.)

```typescript
import { Runtime } from '@temporalio/worker';

Runtime.install({
  telemetryOptions: {
    metrics: {
      prometheus: { bindAddress: '0.0.0.0:9464' },
    },
    logging: { forward: { level: 'DEBUG' } },
  },
});
```

## Set up tracing {#tracing}

Tracing allows you to view the call graph of a Workflow along with its Activities and any Child Workflows. Temporal Web's tracing capabilities mainly track Activity Execution within a Temporal context. If you need custom tracing specific to your use case, you should make use of context propagation to add tracing logic accordingly.

The [`interceptors-opentelemetry`](https://github.com/temporalio/samples-typescript/tree/main/interceptors-opentelemetry) sample shows how to use the SDK's built-in OpenTelemetry tracing to trace everything from starting a Workflow to Workflow Execution to running an Activity from that Workflow.

The built-in tracing uses protobuf message headers (like [this one](https://github.com/temporalio/api/blob/b2b8ae6592a8730dd5be6d90569d1aea84e1712f/temporal/api/workflowservice/v1/request_response.proto#L161) when starting a Workflow) to propagate the tracing information from the client to the Workflow and from the Workflow to its successors (when Continued As New), children, and Activities. All of these executions are linked with a single trace identifier and have the proper `parent -> child` span relation.

Tracing is compatible between different Temporal SDKs as long as compatible [context propagators](https://opentelemetry.io/docs/concepts/context-propagation/) are used.

**Context propagation**

The TypeScript SDK uses the global OpenTelemetry propagator.
To extend the default ([Trace Context](https://github.com/open-telemetry/opentelemetry-js/blob/main/packages/opentelemetry-core/README.md#w3ctracecontextpropagator-propagator) and [Baggage](https://github.com/open-telemetry/opentelemetry-js/blob/main/packages/opentelemetry-core/README.md#baggage-propagator) propagators) to also include the [Jaeger propagator](https://www.npmjs.com/package/@opentelemetry/propagator-jaeger), follow these steps:

- `npm i @opentelemetry/propagator-jaeger`
- At the top level of your Workflow code, add the following lines:

```js
import { propagation } from '@opentelemetry/api';
import {
  CompositePropagator,
  W3CBaggagePropagator,
  W3CTraceContextPropagator,
} from '@opentelemetry/core';
import { JaegerPropagator } from '@opentelemetry/propagator-jaeger';

propagation.setGlobalPropagator(
  new CompositePropagator({
    propagators: [
      new W3CTraceContextPropagator(),
      new W3CBaggagePropagator(),
      new JaegerPropagator(),
    ],
  }),
);
```

Similarly, you can customize the OpenTelemetry `NodeSDK` propagators by following the instructions in the [Initialize the SDK](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-sdk-node#initialize-the-sdk) section of the `README.md` file.

## Log from a Workflow {#logging}

Logging enables you to record critical information during code execution. Loggers create an audit trail and capture information about your Workflow's operation. An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.

The logger supports the following logging levels:

| Level   | Use                                                                                                       |
| ------- | --------------------------------------------------------------------------------------------------------- |
| `TRACE` | The most detailed level of logging, used for very fine-grained information.                                |
| `DEBUG` | Detailed information, typically useful for debugging purposes.                                             |
| `INFO`  | General information about the application's operation.                                                     |
| `WARN`  | Indicates potentially harmful situations or minor issues that don't prevent the application from working.  |
| `ERROR` | Indicates error conditions that might still allow the application to continue running.                     |

The Temporal SDK core normally uses `WARN` as its default logging level.

### Logging from Activities

Activities run in the standard Node.js environment and may therefore use any Node.js logger directly. However, the Temporal SDK provides a convenient Activity Context logger, which funnels log messages to the [Runtime's logger](/develop/typescript/observability#customizing-the-default-logger). Attributes from the current Activity context are automatically included as metadata on every log entry emitted through the Activity context logger, and some key events of the Activity's lifecycle are automatically logged (at DEBUG level for most messages; WARN for failures).
Using the Activity Context logger:

```ts
import { log } from '@temporalio/activity';

export async function greet(name: string): Promise<string> {
  log.info('Log from activity', { name });
  return `Hello, ${name}!`;
}
```
### Logging from Workflows

Workflows may not use regular Node.js loggers because:

1. Workflows run in a sandboxed environment and cannot do any I/O.
2. Workflow code might get replayed at any time, which would result in duplicated log messages.

However, the Temporal SDK provides a Workflow Context logger, which funnels log messages to the [Runtime's logger](/develop/typescript/observability#customizing-the-default-logger). Attributes from the current Workflow context are automatically included as metadata on every log entry emitted through the Workflow context logger, and some key events of the Workflow's lifecycle are automatically logged (at DEBUG level for most messages; WARN for failures).
Using the Workflow Context logger:

```ts
import { log } from '@temporalio/workflow';

export async function myWorkflow(name: string): Promise<string> {
  log.info('Log from workflow', { name });
  return `Hello, ${name}!`;
}
```
The Workflow Context Logger tries to avoid re-emitting log messages during Workflow Replays.

#### Limitations of Workflow logs

Internally, Workflow logging uses Sinks, and is consequently subject to the same limitations as Sinks. Notably, logged objects must be serializable using the V8 serialization algorithm.

### What is the Runtime's Logger

A Temporal Worker may emit logs in various ways, including:

- Messages emitted using the [Workflow Context Logger](#logging);
- Messages emitted using the [Activity Context Logger](#logging-from-activities);
- Messages emitted by the TypeScript SDK Worker itself;
- Messages emitted by the underlying Temporal Core SDK (native code).

All of these messages are internally routed to a single logger object, called the Runtime's Logger. By default, the Runtime's Logger simply writes messages to the console (that is, the process's `STDERR`).

#### How to customize the Runtime's Logger

A custom Runtime Logger may be registered when the SDK `Runtime` is instantiated. This is done only once per process. To register a custom Runtime Logger, you must explicitly instantiate the Runtime, using the [`Runtime.install()`](https://typescript.temporal.io/api/classes/worker.Runtime/#install) function. For example:

```typescript
import { DefaultLogger, makeTelemetryFilterString, Runtime } from '@temporalio/worker';

// This is your custom Logger.
const logger = new DefaultLogger('WARN', ({ level, message }) => {
  console.log(`Custom logger: ${level} — ${message}`);
});

Runtime.install({
  logger,
  // The following block is optional, but generally desired.
  // It allows capturing log messages emitted by the underlying Temporal Core SDK (native code).
  // The Telemetry Filter String determines the desired verbosity of messages emitted by the
  // Temporal Core SDK itself ("core"), and by other native libraries ("other").
  telemetryOptions: {
    logging: {
      filter: makeTelemetryFilterString({ core: 'INFO', other: 'INFO' }),
      forward: {},
    },
  },
});
```

A common use case for this is to write log messages to a file to be picked up by a collector service, such as the [Datadog Agent](https://docs.datadoghq.com/logs/log_collection/nodejs/?tab=winston30). For example:

```typescript
import { makeTelemetryFilterString, Runtime } from '@temporalio/worker';
import winston, { transports } from 'winston';

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [new transports.File({ filename: '/path/to/worker.log' })],
});

Runtime.install({
  logger,
  // The following block is optional, but generally desired.
  // It allows capturing log messages emitted by the underlying Temporal Core SDK (native code).
  // The Telemetry Filter String determines the desired verbosity of messages emitted by the
  // Temporal Core SDK itself ("core"), and by other native libraries ("other").
  telemetryOptions: {
    logging: {
      filter: makeTelemetryFilterString({ core: 'INFO', other: 'INFO' }),
      forward: {},
    },
  },
});
```

### Implementing custom Logging-like features based on Workflow Sinks

Sinks enable one-way export of logs, metrics, and traces from the Workflow isolate to the Node.js environment.
Workflows in Temporal may be replayed from the beginning of their history when resumed. In order for Temporal to recreate the exact state Workflow code was in, the code is required to be fully deterministic. To prevent breaking determinism, in the TypeScript SDK, Workflow code runs in an isolated execution environment and may not use any of the Node.js APIs or communicate directly with the outside world.

Sinks are written as objects with methods. Similar to Activities, they are declared in the Worker and then proxied in Workflow code; it helps to share types between both.

#### Comparing Sinks and Activities

Sinks are similar to Activities in that they are both registered on the Worker and proxied into the Workflow. However, they differ from Activities in important ways:

- A sink function doesn't return any value back to the Workflow and cannot be awaited.
- A sink call isn't recorded in the Event History of a Workflow Execution (no timeouts or retries).
- A sink function _always_ runs on the same Worker that runs the Workflow Execution it's called from.

#### Declare the sink interface

Explicitly declaring a sink's interface is optional but is useful for ensuring type safety in subsequent steps:

[sinks/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/sinks/src/workflows.ts)

```ts
export interface AlertSinks extends Sinks {
  alerter: {
    alert(message: string): void;
  };
}

export type MySinks = AlertSinks;
```

#### Implement sinks

Implementing sinks is a two-step process: implement and inject the Sink function into a Worker, then proxy and call it from Workflow code.

First, implement and inject the Sink function into a Worker:

[sinks/src/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/sinks/src/worker.ts)

```ts
async function main() {
  const sinks: InjectedSinks<MySinks> = {
    alerter: {
      alert: {
        fn(workflowInfo, message) {
          console.log('sending SMS alert!', {
            workflowId: workflowInfo.workflowId,
            workflowRunId: workflowInfo.runId,
            message,
          });
        },
        callDuringReplay: false, // The default
      },
    },
  };

  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    taskQueue: 'sinks',
    sinks,
  });

  await worker.run();
  console.log('Worker gracefully shutdown');
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

- Sink function implementations are passed as an object into [WorkerOptions](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions/#sinks).
- You can specify whether you want the injected function to be called during Workflow replay by setting the `callDuringReplay` option.

#### Proxy and call a sink function from a Workflow

[sinks/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/sinks/src/workflows.ts)

```ts
const { alerter } = proxySinks<MySinks>();

export async function sinkWorkflow(): Promise<string> {
  log.info('Workflow Execution started');
  alerter.alert('alerter: Workflow Execution started');
  return 'Hello, Temporal!';
}
```

Some important features of the [InjectedSinkFunction](https://typescript.temporal.io/api/interfaces/worker.InjectedSinkFunction) interface:

- **Injected WorkflowInfo argument:** The first argument of a Sink function implementation is a [`workflowInfo` object](https://typescript.temporal.io/api/interfaces/workflow.WorkflowInfo/) that contains useful metadata.
- **Limited argument types:** The remaining Sink function arguments are copied between the sandbox and the Node.js environment using the [structured clone algorithm](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm).
- **No return value:** To prevent breaking determinism, Sink functions cannot return values to the Workflow.
**Advanced: Performance considerations and non-blocking Sinks**

The injected sink function contributes to the overall Workflow Task processing duration.

- If you have a long-running sink function, such as one that tries to communicate with external services, you might start seeing Workflow Task timeouts.
- The effect is multiplied when using `callDuringReplay: true` and replaying long Workflow histories because the Workflow Task timer starts when the first history page is delivered to the Worker.

### How to provide a custom logger {#custom-logger}

You can replace the default logger with a custom logger.

#### Logging in Workers and Clients

The Worker comes with a default logger, which logs any messages with level `INFO` and higher to `STDERR` using `console.error`. The available [log levels](https://typescript.temporal.io/api/namespaces/worker#loglevel), in increasing order of severity, are `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`.

#### Customizing the default logger

Temporal uses a [`DefaultLogger`](https://typescript.temporal.io/api/classes/worker.DefaultLogger/) that implements the basic interface:

```ts
import { DefaultLogger, Runtime } from '@temporalio/worker';

const logger = new DefaultLogger('WARN', ({ level, message }) => {
  console.log(`Custom logger: ${level} — ${message}`);
});

Runtime.install({ logger });
```

The previous code example sets the default logger to log only messages with level `WARN` and higher.

#### Accumulate logs for testing and reporting

```ts
import { DefaultLogger, LogEntry } from '@temporalio/worker';

const logs: LogEntry[] = [];
const logger = new DefaultLogger('TRACE', (entry) => logs.push(entry));
logger.debug('hey', { a: 1 });
logger.info('ho');
logger.warn('lets', { a: 1 });
logger.error('go');
```

A common logging use case is logging to a file to be picked up by a collector like the [Datadog Agent](https://docs.datadoghq.com/logs/log_collection/nodejs/?tab=winston30).

```ts
import { Runtime } from '@temporalio/worker';
import winston, { transports } from 'winston';

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [new transports.File({ filename: '/path/to/worker.log' })],
});

Runtime.install({ logger });
```

## Visibility APIs {#visibility}

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.

### How to use Search Attributes {#search-attributes}

The typical method of retrieving a Workflow Execution is by its Workflow Id. However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments. You can do this with [Search Attributes](/search-attribute).

- [Default Search Attributes](/search-attribute#default-search-attribute) like `WorkflowType`, `StartTime` and `ExecutionStatus` are automatically added to Workflow Executions.
- _Custom Search Attributes_ can contain their own domain-specific data (like `customerId` or `numItems`).
- A few [generic Custom Search Attributes](/search-attribute#custom-search-attribute) like `CustomKeywordField` and `CustomIntField` are created by default in Temporal's [Docker Compose](https://github.com/temporalio/docker-compose).

The steps to using custom Search Attributes are:

- Create a new Search Attribute in your Temporal Service using `temporal operator search-attribute create` or the Cloud UI.
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client by including it as an option when starting the Execution.
  - In the Workflow by calling `UpsertSearchAttributes`.
- Read the value of the Search Attribute:
  - On the Client by calling `DescribeWorkflow`.
  - In the Workflow by looking at `WorkflowInfo`.
- Query Workflow Executions by the Search Attribute using a [List Filter](/list-filter):
  - [With the Temporal CLI](/cli/workflow#list).
  - In code by calling `ListWorkflowExecutions`.

Here is how to query Workflow Executions:

Use [`WorkflowService.listWorkflowExecutions`](https://typescript.temporal.io/api/classes/proto.temporal.api.workflowservice.v1.WorkflowService-1#listworkflowexecutions):

```typescript
import { Connection } from '@temporalio/client';

const connection = await Connection.connect();
const response = await connection.workflowService.listWorkflowExecutions({
  query: `ExecutionStatus = "Running"`,
});
```

where `query` is a [List Filter](/list-filter).

### How to set custom Search Attributes {#custom-search-attributes}

After you've created custom Search Attributes in your Temporal Service (using `temporal operator search-attribute create` or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow. Use [`WorkflowOptions.searchAttributes`](https://typescript.temporal.io/api/interfaces/client.WorkflowOptions#searchattributes).

[search-attributes/src/client.ts](https://github.com/temporalio/samples-typescript/blob/main/search-attributes/src/client.ts)

```ts
const handle = await client.workflow.start(example, {
  taskQueue: 'search-attributes',
  workflowId: 'search-attributes-example-0',
  searchAttributes: {
    CustomIntField: [2],
    CustomKeywordListField: ['keywordA', 'keywordB'],
    CustomBoolField: [true],
    CustomDatetimeField: [new Date()],
    CustomTextField: [
      'String field is for text. When queried, it will be tokenized for partial match. StringTypeField cannot be used in Order By',
    ],
  },
});

const { searchAttributes } = await handle.describe();
```

The type of `searchAttributes` is `Record<string, string[] | number[] | boolean[] | Date[]>`.

### How to upsert Search Attributes {#upsert-search-attributes}

You can upsert Search Attributes to add or update Search Attributes from within Workflow code. Inside a Workflow, we can read from [`WorkflowInfo.searchAttributes`](https://typescript.temporal.io/api/interfaces/workflow.WorkflowInfo#searchattributes) and call [`upsertSearchAttributes`](https://typescript.temporal.io/api/namespaces/workflow#upsertsearchattributes):

[search-attributes/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/search-attributes/src/workflows.ts)

```ts
export async function example(): Promise<SearchAttributes> {
  const customInt = (workflowInfo().searchAttributes.CustomIntField?.[0] as number) || 0;
  upsertSearchAttributes({
    // overwrite the existing CustomIntField: [2]
    CustomIntField: [customInt + 1],

    // delete the existing CustomBoolField: [true]
    CustomBoolField: [],

    // add a new value
    CustomDoubleField: [3.14],
  });
  return workflowInfo().searchAttributes;
}
```

### How to remove a Search Attribute from a Workflow {#remove-search-attribute}

To remove a Search Attribute that was previously set, set it to an empty array: `[]`.

```typescript
async function yourWorkflow() {
  upsertSearchAttributes({ CustomIntField: [1, 2, 3] });
  // ...
  // later, to remove:
  upsertSearchAttributes({ CustomIntField: [] });
}
```

---

## Schedules - TypeScript SDK

This page shows how to do the following:

- [Schedule a Workflow](#schedule-a-workflow)
- [Create a Scheduled Workflow](#create)
- [Backfill a Scheduled Workflow](#backfill)
- [Delete a Scheduled Workflow](#delete)
- [Describe a Scheduled Workflow](#describe)
- [List a Scheduled Workflow](#list)
- [Pause a Scheduled Workflow](#pause)
- [Trigger a Scheduled Workflow](#trigger)
- [Update a Scheduled Workflow](#update)
- [Temporal Cron Jobs](#temporal-cron-jobs)
- [Start Delay](#start-delay)

## How to Schedule a Workflow {#schedule-a-workflow}

Scheduling Workflows is a crucial aspect of any automation process, especially when dealing with time-sensitive tasks. By scheduling a Workflow, you can automate repetitive tasks, reduce the need for manual intervention, and ensure timely execution of your business processes. Use any of the following actions to Schedule a Workflow Execution and take control of your automation process.

### How to Create a Scheduled Workflow {#create}

The create action enables you to create a new Schedule. When you create a new Schedule, a unique Schedule ID is generated, which you can use to reference the Schedule in other Schedule commands.

:::tip Schedule Auto-Deletion

Once a Schedule has completed creating all its Workflow Executions, the Temporal Service deletes it since it won't fire again. The Temporal Service doesn't guarantee when this removal will happen.

:::

[schedules/src/start-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/start-schedule.ts)

```ts
async function run() {
  const config = loadClientConnectConfig();
  const connection = await Connection.connect(config.connectionOptions);
  const client = new Client({ connection });

  // https://typescript.temporal.io/api/classes/client.ScheduleClient#create
  const schedule = await client.schedule.create({
    action: {
      type: 'startWorkflow',
      workflowType: reminder,
      args: ['♻️ Dear future self, please take out the recycling tonight. Sincerely, past you ❤️'],
      taskQueue: 'schedules',
    },
    scheduleId: 'sample-schedule',
    policies: {
      catchupWindow: '1 day',
      overlap: ScheduleOverlapPolicy.ALLOW_ALL,
    },
    spec: {
      intervals: [{ every: '10s' }],
      // or periodic calendar times:
      // calendars: [
      //   {
      //     comment: 'every wednesday at 8:30pm',
      //     dayOfWeek: 'WEDNESDAY',
      //     hour: 20,
      //     minute: 30,
      //   },
      // ],
      // or a single datetime:
      // calendars: [
      //   {
      //     comment: '1/1/23 at 9am',
      //     year: 2023,
      //     month: 1,
      //     dayOfMonth: 1,
      //     hour: 9,
      //   },
      // ],
    },
  });
}
```

### How to Backfill a Scheduled Workflow {#backfill}

The backfill action executes Actions ahead of their specified time range. This command is useful when you need to execute a missed or delayed Action, or when you want to test the Workflow before its scheduled time.
[schedules/src/backfill-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/backfill-schedule.ts) ```ts function subtractMinutes(minutes: number): Date { const now = new Date(); return new Date(now.getTime() - minutes * 60 * 1000); } async function run() { const client = new Client({ connection: await Connection.connect(), }); const backfillOptions: Backfill = { start: subtractMinutes(10), end: subtractMinutes(9), overlap: ScheduleOverlapPolicy.ALLOW_ALL, }; const handle = client.schedule.getHandle('sample-schedule'); await handle.backfill(backfillOptions); console.log(`Schedule is now backfilled.`); } ``` ### How to Delete a Scheduled Workflow {#delete} The delete action enables you to delete a Schedule. When you delete a Schedule, it does not affect any Workflows that were started by the Schedule. [schedules/src/delete-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/delete-schedule.ts) ```ts async function run() { const client = new Client({ connection: await Connection.connect(), }); const handle = client.schedule.getHandle('sample-schedule'); await handle.delete(); console.log(`Schedule is now deleted.`); } ``` ### How to Describe a Scheduled Workflow {#describe} The describe action shows the current Schedule configuration, including information about past, current, and future Workflow Runs. This command is helpful when you want to get a detailed view of the Schedule and its associated Workflow Runs. [schedules/src/describe-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/describe-schedule.ts) ```ts async function run() { const client = new Client({ connection: await Connection.connect(), }); const handle = client.schedule.getHandle('sample-schedule'); const result = await handle.describe(); console.log(`Schedule description: ${JSON.stringify(result)}`); } ``` ### How to List a Scheduled Workflow {#list} The list action lists all the available Schedules. This command is useful when you want to view a list of all the Schedules and their respective Schedule IDs. [schedules/src/list-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/list-schedule.ts) ```ts async function run() { const client = new Client({ connection: await Connection.connect(), }); const schedules = []; const scheduleList = client.schedule.list(); for await (const schedule of scheduleList) { schedules.push(schedule); } console.log(`Schedules are now listed: ${JSON.stringify(schedules)}`); } ``` ### How to Pause a Scheduled Workflow {#pause} The pause action enables you to pause and unpause a Schedule. When you pause a Schedule, all the future Workflow Runs associated with the Schedule are temporarily stopped. This command is useful when you want to temporarily halt a Workflow due to maintenance or any other reason. [schedules/src/pause-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/pause-schedule.ts) ```ts async function run() { const client = new Client({ connection: await Connection.connect(), }); const handle = client.schedule.getHandle('sample-schedule'); await handle.pause(); console.log(`Schedule is now paused.`); } ``` ### How to Trigger a Scheduled Workflow {#trigger} The trigger action triggers an immediate action with a given Schedule. By default, this action is subject to the Overlap Policy of the Schedule. This command is helpful when you want to execute a Workflow outside of its scheduled time. 
[schedules/src/trigger-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/trigger-schedule.ts)

```ts
async function run() {
  const client = new Client({
    connection: await Connection.connect(),
  });

  const handle = client.schedule.getHandle('sample-schedule');
  await handle.trigger();

  console.log(`Schedule is now triggered.`);
}
```

### How to Update a Scheduled Workflow {#update}

The update action enables you to update an existing Schedule. This command is useful when you need to modify the Schedule's configuration, such as changing the start time, end time, or interval.

[schedules/src/update-schedule.ts](https://github.com/temporalio/samples-typescript/blob/main/schedules/src/update-schedule.ts)

```ts
const updateSchedule = (
  input: ScheduleDescription,
): ScheduleUpdateOptions<ScheduleOptionsAction> => {
  const scheduleAction = input.action;
  scheduleAction.args = ['my updated schedule arg'];
  return { ...input, action: scheduleAction };
};

async function run() {
  const client = new Client({
    connection: await Connection.connect(),
  });

  const handle = client.schedule.getHandle('sample-schedule');
  await handle.update(updateSchedule);

  console.log(`Schedule is now updated.`);
}
```

## How to use Temporal Cron Jobs {#temporal-cron-jobs}

:::caution Cron support is not recommended

We recommend using [Schedules](https://docs.temporal.io/schedule) instead of Cron Jobs. Schedules were built to provide a better developer experience, including more configuration options and the ability to update or pause running Schedules.

:::

A [Temporal Cron Job](/cron-job) is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution. A Cron Schedule is provided as an option when the call to spawn a Workflow Execution is made.

You can set each Workflow to repeat on a schedule with the `cronSchedule` option:

```typescript
const handle = await client.workflow.start(scheduledWorkflow, {
  // ...
  cronSchedule: '* * * * *', // start every minute
});
```

Temporal Workflow Schedule Cron strings follow this format:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```

## Start Delay {#start-delay}

**How to use Start Delay**

Use the `startDelay` option to schedule a Workflow Execution at a specific one-time future point rather than on a recurring schedule. You may specify the `startDelay` option on either the [`client.workflow.start()`](https://typescript.temporal.io/api/classes/client.WorkflowClient#start) or [`client.workflow.execute()`](https://typescript.temporal.io/api/classes/client.WorkflowClient#execute) methods of a Workflow Client. For example:

```typescript
const handle = await client.workflow.start(someWorkflow, {
  // ...
  startDelay: '2 hours',
});
```

---

## Set up your local environment with the TypeScript SDK

---

# Quickstart

Configure your local development environment to get started developing with Temporal.

## Install Node.js

The TypeScript SDK requires Node.js 18 or later. Install Node.js via your package manager by following the official Node.js instructions.
## Install the Temporal TypeScript SDK

You can create a new project with the Temporal SDK:

```bash
npx @temporalio/create@latest ./my-app
```

When prompted to select a sample, choose the hello-world sample. If you're creating a new project using `npx @temporalio/create`, the required SDK packages will be installed automatically. To add Temporal to an existing project, install the required packages manually with `npm install @temporalio/client @temporalio/worker @temporalio/workflow`.

Next, you'll configure a local Temporal Service for development.

## Install Temporal CLI

The fastest way to get a development version of the Temporal Service running on your local machine is to use [Temporal CLI](https://docs.temporal.io/cli). Choose your operating system to install Temporal CLI.

On macOS, install the Temporal CLI using Homebrew:

```bash
brew install temporal
```

On Windows, download the Temporal CLI archive for your architecture (Windows amd64 or Windows arm64), extract it, and add `temporal.exe` to your PATH.

On Linux, download the Temporal CLI for your architecture (Linux amd64 or Linux arm64), extract the archive, and move the `temporal` binary into your PATH, for example:

```bash
sudo mv temporal /usr/local/bin
```

## Start the development server

Once you've installed Temporal CLI and added it to your PATH, open a new Terminal window and start the development server:

```bash
temporal server start-dev
```

This command starts a local Temporal Service. It starts the Web UI, creates the default Namespace, and uses an in-memory database. The Temporal Service will be available on localhost:7233, and the Temporal Web UI will be available at http://localhost:8233.

The Temporal Web UI may be on a different port in some examples or tutorials. To change the port for the Web UI, use the `--ui-port` option when starting the server:

```bash
temporal server start-dev --ui-port 8080
```

The Temporal Web UI will now be available at http://localhost:8080.

Leave the local Temporal Service running as you work through tutorials and other projects. You can stop the Temporal Service at any time by pressing CTRL+C.

Once you have everything installed, you're ready to build apps with Temporal on your local machine.

## Run Hello World: Test Your Installation

Now let's verify your setup is working by creating and running a complete Temporal application with both a Workflow and Activity. This test will confirm that:

- The Temporal TypeScript SDK is properly installed
- Your local Temporal Service is running
- You can successfully create and execute Workflows and Activities
- The communication between components is functioning correctly

### 1. Create the Activity

Create an Activity file (activities.ts):

```ts
export async function greet(name: string): Promise<string> {
  return `Hello, ${name}!`;
}
```

An Activity is a normal function or method that executes a single, well-defined action (either short or long running). Activities often interact with the outside world, such as sending emails, making network requests, writing to a database, or calling an API; such operations are prone to failure. If an Activity fails, Temporal automatically retries it based on your configuration.

### 2. Create the Workflow

Create a Workflow file (workflows.ts):

```ts
import { proxyActivities } from '@temporalio/workflow';
// Only import the activity types
import type * as activities from './activities';

const { greet } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

/** A workflow that simply calls an activity */
export async function example(name: string): Promise<string> {
  return await greet(name);
}
```

Workflows orchestrate Activities and contain the application logic. Temporal Workflows are resilient.
They can run and keep running for years, even if the underlying infrastructure fails. If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off.

### 3. Create and Run the Worker

Create a Worker file (worker.ts):

```ts
import { NativeConnection, Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  // Step 1: Establish a connection with Temporal server.
  //
  // Worker code uses `@temporalio/worker.NativeConnection`.
  // (But in your application code it's `@temporalio/client.Connection`.)
  const connection = await NativeConnection.connect({
    address: 'localhost:7233',
    // TLS and gRPC metadata configuration goes here.
  });
  try {
    // Step 2: Register Workflows and Activities with the Worker.
    const worker = await Worker.create({
      connection,
      namespace: 'default',
      taskQueue: 'hello-world',
      // Workflows are registered using a path as they run in a separate JS context.
      workflowsPath: require.resolve('./workflows'),
      activities,
    });

    // Step 3: Start accepting tasks on the `hello-world` queue
    //
    // The worker runs until it encounters an unexpected error or the process receives a shutdown signal registered on
    // the SDK Runtime object.
    //
    // By default, worker logs are written via the Runtime logger to STDERR at INFO level.
    //
    // See https://typescript.temporal.io/api/classes/worker.Runtime#install to customize these defaults.
    await worker.run();
  } finally {
    // Close the connection once the worker has stopped
    await connection.close();
  }
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Run the Worker and keep this terminal running:

```bash
npm run start
```

With your Activity and Workflow defined, you need a Worker to execute them. A Worker polls the Task Queue you configure it to poll, looking for work to do. Once the Worker dequeues a Workflow or Activity task from the Task Queue, it then executes that task. Workers are a crucial part of your Temporal application as they're what actually execute the tasks defined in your Workflows and Activities. For more information on Workers, see [Understanding Temporal](/evaluate/understanding-temporal#workers) and a [deep dive into Workers](/workers).

### 4. Execute the Workflow

Now that your Worker is running, it's time to start a Workflow Execution. This final step validates that everything is working correctly. Create a separate file called `client.ts`:

```ts
import { Client, Connection } from '@temporalio/client';
import { nanoid } from 'nanoid';
import { example } from './workflows';

async function run() {
  // Connect to the default Server location
  const connection = await Connection.connect({ address: 'localhost:7233' });
  // In production, pass options to configure TLS and other settings:
  // {
  //   address: 'foo.bar.tmprl.cloud',
  //   tls: {}
  // }

  const client = new Client({
    connection,
    // namespace: 'foo.bar', // connects to 'default' namespace if not specified
  });

  const handle = await client.workflow.start(example, {
    taskQueue: 'hello-world',
    // type inference works! args: [name: string]
    args: ['Temporal'],
    // in practice, use a meaningful business ID, like customerId or transactionId
    workflowId: 'workflow-' + nanoid(),
  });
  console.log(`Started workflow ${handle.workflowId}`);

  // optional: wait for client result
  console.log(await handle.result()); // Hello, Temporal!
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Then run:

```bash
npm run workflow
```

### Verify Success

If everything is working correctly, you should see:

- Worker processing the workflow and activity
- Output: `Hello, Temporal!`
- Workflow Execution details in the [Temporal Web UI](http://localhost:8233)
**Additional details about Workflow Execution**

- Temporal clients are not explicitly closed.
- To enable TLS, the `tls` option can be set to `true` or to a `TLSConfig` object.
- Calling `client.workflow.start()` and `client.workflow.execute()` sends a command to Temporal Server to schedule a new Workflow Execution on the specified Task Queue.
- If you started a Workflow with `client.workflow.start()`, you can choose to wait for the result anytime with `handle.result()`.
- Using a Workflow Handle isn't necessary with `client.workflow.execute()`.
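To illustrate the last two points, here is a minimal sketch comparing the two calls; the Workflow IDs and Task Queue are illustrative, and `example` is the Workflow from this guide:

```ts
import { Client } from '@temporalio/client';
import { example } from './workflows';

async function compareStartAndExecute(client: Client): Promise<void> {
  // start() returns a handle right away; fetch the result whenever you like.
  const handle = await client.workflow.start(example, {
    taskQueue: 'hello-world',
    args: ['Temporal'],
    workflowId: 'hello-start-' + Date.now(),
  });
  console.log(await handle.result()); // Hello, Temporal!

  // execute() starts the Workflow and awaits its result in a single call,
  // so no handle is needed.
  const result = await client.workflow.execute(example, {
    taskQueue: 'hello-world',
    args: ['Temporal'],
    workflowId: 'hello-execute-' + Date.now(),
  });
  console.log(result); // Hello, Temporal!
}
```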
---

## Temporal Client - TypeScript SDK

A [Temporal Client](/encyclopedia/temporal-sdks#temporal-client) enables you to communicate with the Temporal Service. Communication with a Temporal Service lets you perform actions such as starting Workflow Executions, sending Signals and Queries to Workflow Executions, getting Workflow results, and more.

You cannot initialize a Temporal Client inside a Workflow. However, Clients are commonly initialized inside an Activity to communicate with a Temporal Service.

This page shows you how to do the following using the TypeScript SDK with the Temporal Client:

- [Connect to a local development Temporal Service](#connect-to-development-service)
- [Connect to Temporal Cloud](#connect-to-temporal-cloud)
- [Connect to Temporal Service from a Worker](#connect-to-temporal-service-from-a-worker)
- [Start a Workflow Execution](#start-workflow-execution)
- [Get Workflow results](#get-workflow-results)

In the TypeScript SDK, connecting to the Temporal Service from a Temporal Application or from within an Activity relies on a different type of connection than connecting from a Worker. The sections [Connect to a local development Temporal Service](#connect-to-development-service) and [Connect to Temporal Cloud](#connect-to-temporal-cloud) apply to connecting from a Temporal Application or from within an Activity. See [Connect to Temporal Service from a Worker](#connect-to-temporal-service-from-a-worker) for details on connecting from a Worker.

## Connect to development Temporal Service {#connect-to-development-service}

To connect to a development Temporal service from a Temporal Application or from within an Activity, import the [`Connection` class](https://typescript.temporal.io/api/classes/client.Connection) from `@temporalio/client` and use [`Connection.connect`](https://typescript.temporal.io/api/classes/client.Connection#connect) to create a Connection object to connect to the Temporal Service. Then pass in that connection when you create a new `Client` instance. If you leave the connection options empty, the SDK defaults to connecting to `127.0.0.1:7233` in the `default` Namespace.

```ts
import { Client, Connection } from '@temporalio/client';

async function run() {
  const connection = await Connection.connect();
  const client = new Client({ connection });
}
```

If you need to connect to a Temporal Service with custom options, you can provide connection options directly in code, load them from **environment variables**, or load them from a **TOML configuration file** using the `@temporalio/envconfig` helpers. We recommend environment variables or a configuration file for secure, repeatable configuration.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client.

You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the configuration file path, the SDK looks for it at the path `~/.config/temporalio/temporal.toml` or the equivalent on your OS. Refer to [Environment Configuration](../environment-configuration.mdx) for more details about configuration files and profiles.

:::info

The connection options set in configuration files have lower precedence than environment variables.
This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines two profiles: `default` and `prod`. Each profile has its own set of connection options.

```toml title="config.toml"
# Default profile for local development
[profile.default]
address = "localhost:7233"
namespace = "default"

# Optional: Add custom gRPC headers
[profile.default.grpc_meta]
my-custom-header = "development-value"
trace-id = "dev-trace-123"

# Production profile for Temporal Cloud
[profile.prod]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"

# TLS configuration for production
# TLS auto-enables when TLS config or an API key is present
[profile.prod.tls]
# disabled = false
client_cert_path = "/etc/temporal/certs/client.pem"
client_key_path = "/etc/temporal/certs/client.key"

# Custom headers for production
[profile.prod.grpc_meta]
environment = "production"
service-version = "v1.2.3"
```

You can create a Temporal Client using a profile from the configuration file as follows. In this example, you load the `default` profile for local development:

{/* SNIPSTART typescript-env-config-load-default-profile {"highlightedLines": "17-19,28-29"} */}

[env-config/src/load-from-file.ts](https://github.com/temporalio/samples-typescript/blob/main/env-config/src/load-from-file.ts)

```ts {17-19,28-29}
import { resolve } from 'node:path';

import { Client, Connection } from '@temporalio/client';
import { loadClientConnectConfig } from '@temporalio/envconfig';

async function main() {
  console.log('--- Loading default profile from config.toml ---');

  // For this sample to be self-contained, we explicitly provide the path to
  // the config.toml file included in this directory.
  // By default though, the config.toml file will be loaded from
  // ~/.config/temporalio/temporal.toml (or the equivalent standard config directory on your OS).
  const configFile = resolve(__dirname, '../config.toml');

  // loadClientConnectConfig is a helper that loads a profile and prepares
  // the configuration for Connection.connect and Client. By default, it loads the
  // "default" profile.
  const config = loadClientConnectConfig({
    configSource: { path: configFile },
  });

  console.log(`Loaded 'default' profile from ${configFile}.`);
  console.log(`  Address: ${config.connectionOptions.address}`);
  console.log(`  Namespace: ${config.namespace}`);
  console.log(`  gRPC Metadata: ${JSON.stringify(config.connectionOptions.metadata)}`);

  console.log('\nAttempting to connect to client...');
  try {
    const connection = await Connection.connect(config.connectionOptions);
    const client = new Client({ connection, namespace: config.namespace });
    console.log('✅ Client connected successfully!');
    await connection.close();
  } catch (err) {
    console.log(`❌ Failed to connect: ${err}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

{/* SNIPEND */}

Use the `@temporalio/envconfig` module to set connection options for the Temporal Client using environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration).

For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables.
If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables will take precedence.

Set the following environment variables before running your application. Replace the placeholder values with your actual configuration. Since this is for a local development Temporal Service, the values connect to `localhost:7233` and the `default` Namespace. You may omit these variables entirely since they're the defaults.

```bash
export TEMPORAL_NAMESPACE="default"
export TEMPORAL_ADDRESS="localhost:7233"
```

After setting the environment variables, use the following code to create the Temporal Client. Since the environment variables take precedence, they will override any values set in the configuration file. Therefore, you may leave `loadClientConnectConfig`'s arguments empty:

{/* SNIPSTART typescript-env-config-load-default-profile {"highlightedLines": "7,17-18", "selectedLines": ["1-5","17","19","22-40"]} */}

[env-config/src/load-from-file.ts](https://github.com/temporalio/samples-typescript/blob/main/env-config/src/load-from-file.ts)

```ts {7,17-18}
async function main() {
  // ...
  const config = loadClientConnectConfig({
    // ...
  });
  // ...
  console.log(`  Address: ${config.connectionOptions.address}`);
  console.log(`  Namespace: ${config.namespace}`);
  console.log(`  gRPC Metadata: ${JSON.stringify(config.connectionOptions.metadata)}`);

  console.log('\nAttempting to connect to client...');
  try {
    const connection = await Connection.connect(config.connectionOptions);
    const client = new Client({ connection, namespace: config.namespace });
    console.log('✅ Client connected successfully!');
    await connection.close();
  } catch (err) {
    console.log(`❌ Failed to connect: ${err}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

{/* SNIPEND */}

If you don't want to use environment variables or a configuration file, you can specify connection options directly in code. This is convenient for local development and testing. You can also load a base configuration from environment variables or a configuration file, and then override specific options in code.

```ts
const connection = await Connection.connect({
  address: '<endpoint>',
  tls: true,
  apiKey: '<api-key>',
});
const client = new Client({
  connection,
  namespace: '<namespace>.<account-id>',
});
```

## Connect to Temporal Cloud {#connect-to-temporal-cloud}

You can connect to Temporal Cloud using either an [API key](/cloud/api-keys) or through mTLS.

Connection to Temporal Cloud or any secured Temporal Service requires additional connection options compared to connecting to an unsecured local development instance:

- Your credentials for authentication.
  - If you are using an API key, provide the API key value.
  - If you are using mTLS, provide the mTLS client certificate and mTLS private key.
- Your _Namespace and Account ID_ combination, which follows the format `<namespace>.<account-id>`.
- The _endpoint_ may vary. The most common endpoint used is the gRPC regional endpoint, which follows the format: `<region>.<cloud-provider>.api.temporal.io:7233`.
  - For Namespaces with High Availability features with API key authentication enabled, use the gRPC Namespace endpoint: `<namespace>.<account-id>.tmprl.cloud:7233`. This allows automated failover without needing to switch endpoints.
You can find the Namespace and Account ID, as well as the endpoint, on the Namespaces tab:

![The Namespace and Account ID combination on the left, and the regional endpoint on the right](/img/cloud/apikeys/namespaces-and-regional-endpoints.png)

You can provide these connection options using environment variables, a configuration file, or directly in code.

You can use a TOML configuration file to set connection options for the Temporal Client. The configuration file lets you configure multiple profiles, each with its own set of connection options. You can then specify which profile to use when creating the Temporal Client. For a list of all available configuration options you can set in the TOML file, refer to [Environment Configuration](/references/client-environment-configuration).

You can use the environment variable `TEMPORAL_CONFIG_FILE` to specify the location of the TOML file or provide the path to the file directly in code. If you don't provide the path to the configuration file, the SDK looks for it at the default path `~/.config/temporalio/temporal.toml`.

:::info

The connection options set in configuration files have lower precedence than environment variables. This means that if you set the same option in both the configuration file and as an environment variable, the environment variable value overrides the option set in the configuration file.

:::

For example, the following TOML configuration file defines a `staging` profile with the necessary connection options to connect to Temporal Cloud via an API key:

```toml
# Cloud profile for Temporal Cloud
[profile.staging]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"
```

If you want to use mTLS authentication instead of an API key, replace the `api_key` field with your mTLS client certificate and private key:

```toml
# Cloud profile for Temporal Cloud
[profile.staging]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"

[profile.staging.tls]
client_cert_path = "/path/to/your/client.pem"
client_key_path = "/path/to/your/client.key"
```

With the connection options defined in the configuration file, use the `loadClientConnectConfig` helper from `@temporalio/envconfig` to load the `staging` profile from the configuration file. You can then pass the resulting configuration to the `Connection.connect` method. Then pass the `connection` object and the Namespace to the `Client` constructor to create a Temporal Client using the `staging` profile, as follows. After loading the profile, you can also programmatically override specific connection options before creating the client.

{/* SNIPSTART typescript-env-config-load-profile-with-overrides {"highlightedLines": "15-18,30-31"} */}

[env-config/src/load-profile.ts](https://github.com/temporalio/samples-typescript/blob/main/env-config/src/load-profile.ts)

```ts {15-18,30-31}
import { resolve } from 'node:path';

import { Client, Connection } from '@temporalio/client';
import { loadClientConnectConfig } from '@temporalio/envconfig';

async function main() {
  console.log("--- Loading 'staging' profile with programmatic overrides ---");

  const configFile = resolve(__dirname, '../config.toml');
  const profileName = 'staging';

  // The 'staging' profile in config.toml has an incorrect address (localhost:9999)
  // We'll programmatically override it to the correct address

  // Load the 'staging' profile.
  const config = loadClientConnectConfig({
    profile: profileName,
    configSource: { path: configFile },
  });

  // Override the target host to the correct address.
  // This is the recommended way to override configuration values.
config.connectionOptions.address = 'localhost:7233'; console.log(`\nLoaded '${profileName}' profile from ${configFile} with overrides.`); console.log(` Address: ${config.connectionOptions.address} (overridden from localhost:9999)`); console.log(` Namespace: ${config.namespace}`); console.log('\nAttempting to connect to client...'); try { const connection = await Connection.connect(config.connectionOptions); const client = new Client({ connection, namespace: config.namespace }); console.log('✅ Client connected successfully!'); await connection.close(); } catch (err) { console.log(`❌ Failed to connect: ${err}`); } } main().catch((err) => { console.error(err); process.exit(1); }); ``` {/* SNIPEND */} The following environment variables are required to connect to Temporal Cloud: - `TEMPORAL_NAMESPACE`: Your Namespace and Account ID combination in the format `.`. - `TEMPORAL_ADDRESS`: The gRPC endpoint for your Temporal Cloud Namespace. - `TEMPORAL_API_KEY`: Your API key value. Required if you are using API key authentication. - `TEMPORAL_TLS_CLIENT_CERT_DATA` or `TEMPORAL_TLS_CLIENT_CERT_PATH`: Your mTLS client certificate data or file path. Required if you are using mTLS authentication. - `TEMPORAL_TLS_CLIENT_KEY_DATA` or `TEMPORAL_TLS_CLIENT_KEY_PATH`: Your mTLS client private key data or file path. Required if you are using mTLS authentication. Ensure these environment variables exist in your environment before running your application. Import the `EnvConfig` package to set connection options for the Temporal Client using environment variables. The `MustLoadDefaultClientOptions` function will automatically load all environment variables. For a list of all available environment variables and their default values, refer to [Environment Configuration](/references/client-environment-configuration). For example, the following code snippet loads all environment variables and creates a Temporal Client with the options specified in those variables. If you have defined a configuration file at either the default location (`~/.config/temporalio/temporal.toml`) or a custom location specified by the `TEMPORAL_CONFIG_FILE` environment variable, this will also load the default profile in the configuration file. However, any options set via environment variables will take precedence. {/* SNIPSTART typescript-env-config-load-default-profile {"highlightedLines": "17-19,28-29", "selectedLines": ["1-5","17","19","22-40"]} */} [env-config/src/load-from-file.ts](https://github.com/temporalio/samples-typescript/blob/main/env-config/src/load-from-file.ts) ```ts {17-19,28-29} async function main() { // ... const config = loadClientConnectConfig({ // ... }); // ... console.log(` Address: ${config.connectionOptions.address}`); console.log(` Namespace: ${config.namespace}`); console.log(` gRPC Metadata: ${JSON.stringify(config.connectionOptions.metadata)}`); console.log('\nAttempting to connect to client...'); try { const connection = await Connection.connect(config.connectionOptions); const client = new Client({ connection, namespace: config.namespace }); console.log('✅ Client connected successfully!'); await connection.close(); } catch (err) { console.log(`❌ Failed to connect: ${err}`); } } main().catch((err) => { console.error(err); process.exit(1); }); ``` {/* SNIPEND */} You can also provide connections options in your code directly. To create an initial connection, provide the Namespace and API key values to the `Connection.connect` method. 
```ts
const connection = await Connection.connect({
  address: '<endpoint>',
  tls: true,
  apiKey: '<api-key>',
});
const client = new Client({
  connection,
  namespace: '<namespace_id>.<account_id>',
});
```

To update an API key, use the `setApiKey` method on the Connection object:

```ts
connection.setApiKey('<new-api-key>');
```

## Connect to Temporal Service from a Worker {#connect-to-temporal-service-from-a-worker}

Connecting to a Temporal Service from a Worker requires the same set of connection options as connecting from a Temporal Application or from within an Activity, but the connection type is different. When connecting from a Worker, you create a `NativeConnection` object instead of a `Connection` object. The [`NativeConnection` class](https://typescript.temporal.io/api/classes/worker.NativeConnection) is imported from `@temporalio/worker` instead of `@temporalio/client`. After you create the `NativeConnection` object, you pass it to `Worker.create()` when creating the Worker.

To provide connection options to the `NativeConnection`, you can use environment variables, a configuration file, or directly in code. The following code snippets show how to create a `NativeConnection` object using each method. Refer to [Connect to a local development Temporal Service](#connect-to-development-service) and [Connect to Temporal Cloud](#connect-to-temporal-cloud) for details on how to provide connection options using each method.

Ensure you have a TOML configuration file with the necessary connection options defined. For example, the following TOML configuration file defines a `staging` profile with the necessary connection options to connect to Temporal Cloud via an API key:

```toml
# Cloud profile for Temporal Cloud
[profile.staging]
address = "your-namespace.a1b2c.tmprl.cloud:7233"
namespace = "your-namespace"
api_key = "your-api-key-here"
```

Use the `loadClientConnectConfig` helper from `@temporalio/envconfig` to load the `staging` profile from the configuration file and create a `NativeConnection` object as follows:

```ts {1,15,17}
async function main() {
  const configFile = resolve(__dirname, '../config.toml');
  const profileName = 'staging';

  // Load the 'staging' profile.
  const config = loadClientConnectConfig({
    profile: profileName,
    configSource: { path: configFile },
  });

  const connection = await NativeConnection.connect(config.connectionOptions);

  const worker = await Worker.create({
    connection,
    namespace: config.namespace,
    // ...
  });
}
```

Ensure you have set the necessary environment variables to connect to Temporal Cloud. For example:

```bash
export TEMPORAL_NAMESPACE="your-namespace.your-account-id"
export TEMPORAL_ADDRESS="your-namespace.a1b2c.tmprl.cloud:7233"
export TEMPORAL_TLS_CLIENT_CERT_PATH="/path/to/your/client/cert.pem"
export TEMPORAL_TLS_CLIENT_KEY_PATH="/path/to/your/client/key.pem"
```

After setting the environment variables, use the following code to create a `NativeConnection` object using the `loadClientConnectConfig` helper from `@temporalio/envconfig`:

```ts {1,5}
async function main() {
  const config = loadClientConnectConfig();
  const connection = await NativeConnection.connect(config.connectionOptions);

  const worker = await Worker.create({
    connection,
    namespace: process.env.TEMPORAL_NAMESPACE,
    // ...
  });
}
```

You can also provide connection options in your TypeScript code directly.
To create an initial connection, provide the connection options to the `NativeConnection.connect` method, and then pass the resulting `NativeConnection` object to `Worker.create()` when creating the Worker:

```ts {1,4,9}
const connection = await NativeConnection.connect({
  address: '<endpoint>',
  tls: true,
  apiKey: '<api-key>',
});

const worker = await Worker.create({
  connection,
  namespace: '<namespace_id>.<account_id>',
  // ...
});
```

## NativeConnection, Connection, and Client

`NativeConnection`, `Connection`, and `Client` are all classes provided by the TypeScript SDK to facilitate communication with the Temporal Service. This section explains the differences between these classes and their respective use cases. For detailed information about each class, refer to the [Temporal TypeScript API documentation](https://typescript.temporal.io/api/namespaces/client).

### NativeConnection vs. Connection {#native-connection-vs-connection}

The TypeScript SDK provides two types of connection classes to connect to the Temporal Service: `NativeConnection` and `Connection`. The `NativeConnection` class is used to connect from a Worker, while the `Connection` class is used to connect from a Temporal Application or from within an Activity, typically through a `Client` object. Both connection classes accept the same set of connection options.

### Connection vs. Client {#connection-vs-client}

A `Client` object is a high-level, lightweight abstraction that simplifies interaction with the Temporal Service. It internally manages a `Connection` object to handle the low-level communication details. The `Client` class provides convenient methods for common operations such as starting Workflow Executions, sending Signals and Queries, and retrieving Workflow results.

A `Connection` object is a lower-level and expensive object that represents a direct connection to the Temporal Service. You pass a `Connection` object to the `Client` constructor to create a `Client` instance. Since a `Connection` is expensive to create, create a single `Connection` object and reuse it across your application whenever possible.

When instantiating a `Connection`, you specify most connection options except for the Namespace, such as the Temporal Service endpoint, TLS settings, and authentication credentials. When instantiating a `Client`, you provide the `Connection` object and the Namespace you want to connect to, along with other client options.

## Start Workflow Execution {#start-workflow-execution}

**How to start a Workflow Execution using the TypeScript SDK**

[Workflow Execution](/workflow-execution) semantics rely on several parameters—that is, to start a Workflow Execution you must supply a Task Queue that will be used for the Tasks (one that a Worker is polling), the Workflow Type, language-specific contextual data, and Workflow Function parameters.

In the examples below, all Workflow Executions are started using a Temporal Client. To spawn Workflow Executions from within another Workflow Execution, use either the Child Workflow or External Workflow APIs.

See the [Customize Workflow Type](/develop/typescript/core-application#workflow-type) section to see how to customize the name of the Workflow Type.

A request to spawn a Workflow Execution causes the Temporal Service to create the first Event ([WorkflowExecutionStarted](/references/events#workflowexecutionstarted)) in the Workflow Execution Event History. The Temporal Service then creates the first Workflow Task, resulting in the first [WorkflowTaskScheduled](/references/events#workflowtaskscheduled) Event.
When you have a Client, you can schedule the start of a Workflow with `client.workflow.start()`, specifying `workflowId`, `taskQueue`, and `args`. It returns a Workflow handle immediately after the Server acknowledges receipt of the request.

```typescript
const handle = await client.workflow.start(example, {
  workflowId: 'your-workflow-id',
  taskQueue: 'your-task-queue',
  args: ['argument01', 'argument02', 'argument03'], // this is typechecked against workflowFn's args
});
const result = await handle.result();
```

Calling `client.workflow.start()` or `client.workflow.execute()` sends a command to the Temporal Server to schedule a new Workflow Execution on the specified Task Queue. The Workflow Execution does not actually start until a Worker that has a matching Workflow Type, polling that Task Queue, picks it up.

You can test this by executing a Client command without a matching Worker. The Temporal Server records the command in Event History, but does not make progress with the Workflow Execution until a Worker starts polling with a matching Task Queue and Workflow Definition.

Workflow Executions run in a separate V8 isolate context in order to provide a [deterministic runtime](/workflow-definition#deterministic-constraints).

### Set a Workflow's Task Queue {#set-task-queue}

In most SDKs, the only Workflow Option that must be set is the name of the [Task Queue](/task-queue).

For any code to execute, a Worker Process must be running that contains a Worker Entity that is polling the same Task Queue name.

A Task Queue is a dynamic queue in Temporal polled by one or more Workers.

Workers bundle Workflow code and node modules using Webpack v5 and execute them inside V8 isolates. Activities are directly required and run by Workers in the Node.js environment.

Workers are flexible. You can host any or all of your Workflows and Activities on a Worker, and you can host multiple Workers on a single machine.

The Worker needs three main things:

- `taskQueue`: The Task Queue to poll. This is the only required argument.
- `activities`: Optional. Imported and supplied directly to the Worker.
- Workflow bundle. Choose one of the following options:
  - Specify `workflowsPath` pointing to your `workflows.ts` file to pass to Webpack; for example, `require.resolve('./workflows')`. Workflows are bundled with their dependencies.
  - If you prefer to handle the bundling yourself, pass a prebuilt bundle to `workflowBundle`.

```ts
async function run() {
  // Step 1: Register Workflows and Activities with the Worker and connect to
  // the Temporal server.
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities,
    taskQueue: 'hello-world',
  });
  // Worker connects to localhost by default and uses console.error for logging.
  // Customize the Worker by passing more options to create():
  // https://typescript.temporal.io/api/classes/worker.Worker
  // If you need to configure server connection parameters, see docs:
  // /typescript/security#encryption-in-transit-with-mtls

  // Step 2: Start accepting tasks on the `hello-world` queue
  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

`taskQueue` is the only required option; however, use `workflowsPath` and `activities` to register Workflows and Activities with the Worker.

When scheduling a Workflow, you must specify `taskQueue`.

```ts
// This is the code that is used to start a Workflow.
const connection = await Connection.connect();
const client = new Client({ connection });
const result = await client.workflow.execute(yourWorkflow, {
  // required
  taskQueue: 'your-task-queue',
  // required
  workflowId: 'your-workflow-id',
});
```

When creating a Worker, you must pass the `taskQueue` option to the `Worker.create()` function.

```ts
const worker = await Worker.create({
  // imported elsewhere
  activities,
  taskQueue: 'your-task-queue',
});
```

Optionally, in Workflow code, when calling an Activity, you can specify the Task Queue by passing the `taskQueue` option to `proxyActivities()`, `startChild()`, or `executeChild()`. If you do not specify `taskQueue`, the TypeScript SDK places Activity and Child Workflow Tasks in the same Task Queue as the Workflow Task Queue.

### Set a Workflow Id {#workflow-id}

Although it is not required, we recommend providing your own [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) that maps to a business process or business entity identifier, such as an order identifier or customer identifier.

Start the Workflow with `client.workflow.start()`, specifying your `taskQueue` and setting your `workflowId` to a meaningful business identifier.

```typescript
const handle = await client.workflow.start(example, {
  workflowId: 'yourWorkflowId',
  taskQueue: 'yourTaskQueue',
  args: ['your', 'arg', 'uments'],
});
```

This starts a new Workflow Execution with the given Workflow Id, Task Queue name, and arguments.

### Get the results of a Workflow Execution {#get-workflow-results}

If the call to start a Workflow Execution is successful, you will gain access to the Workflow Execution's Run Id. The Workflow Id, Run Id, and Namespace may be used to uniquely identify a Workflow Execution in the system and get its result.

It's possible to either block on the result (synchronous execution) or get the result at some other point in time (asynchronous execution). You can also use Queries to access the state and results of Workflow Executions.

To return the results of a Workflow Execution:

```typescript
return 'Completed ' + wf.workflowInfo().workflowId + ', Total Charged: ' + totalCharged;
```

`totalCharged` is just a value computed in your Workflow code. For a full example, see [subscription-workflow-project-template-typescript/src/workflows.ts](https://github.com/temporalio/subscription-workflow-project-template-typescript/blob/main/src/workflows.ts).

A Workflow function may return a result. If it doesn't (in which case the return type is `Promise<void>`), the result will be `undefined`.

If you started a Workflow with `client.workflow.start()`, you can choose to wait for the result at any time with `handle.result()`.

```typescript
const handle = client.workflow.getHandle(workflowId);
const result = await handle.result();
```

Using a Workflow Handle isn't necessary with `client.workflow.execute()`.

If you call `result()` on a Workflow that prematurely ended for some reason, it throws a [`WorkflowFailedError`](https://typescript.temporal.io/api/classes/client.WorkflowFailedError/) that reflects the reason. It is therefore recommended to catch that error.
```typescript
const handle = client.workflow.getHandle(workflowId);
try {
  const result = await handle.result();
} catch (err) {
  if (err instanceof WorkflowFailedError) {
    throw new Error('Temporal workflow failed: ' + workflowId, {
      cause: err,
    });
  } else {
    throw new Error('error from Temporal workflow ' + workflowId, {
      cause: err,
    });
  }
}
```

---

## Temporal Nexus - TypeScript SDK Feature Guide

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal TypeScript SDK support for Nexus is at [Pre-release](/evaluate/development-production-features/release-stages#pre-release). All APIs are experimental and may be subject to backwards-incompatible changes.
:::

Use [Temporal Nexus](/evaluate/nexus) to connect Temporal Applications within and across Namespaces using a Nexus Endpoint, a Nexus Service contract, and Nexus Operations. This page shows how to do the following:

- [Run a development Temporal Service with Nexus enabled](#run-the-temporal-nexus-development-server)
- [Create caller and handler Namespaces](#create-caller-handler-namespaces)
- [Create a Nexus Endpoint to route requests from caller to handler](#create-nexus-endpoint)
- [Define the Nexus Service contract](#define-nexus-service-contract)
- [Develop a Nexus Service and Operation handlers](#develop-nexus-service-operation-handlers)
- [Develop a caller Workflow that uses a Nexus Service](#develop-caller-workflow-nexus-service)
- [Understand exceptions in Nexus Operations](#exceptions-in-nexus-operations)
- [Cancel a Nexus Operation](#canceling-a-nexus-operation)
- [Make Nexus calls across Namespaces in Temporal Cloud](#nexus-calls-across-namespaces-temporal-cloud)

:::note
This documentation uses source code derived from the [TypeScript Nexus sample](https://github.com/temporalio/samples-typescript/tree/main/nexus-hello).
:::

## Run the Temporal Development Server with Nexus enabled {#run-the-temporal-nexus-development-server}

Prerequisites:

- [Install the latest Temporal CLI](https://learn.temporal.io/getting_started/typescript/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli) (`v1.3.0` or higher recommended)
- [Install the latest Temporal TypeScript SDK](https://learn.temporal.io/getting_started/typescript/dev_environment/#add-temporal-typescript-sdk-dependencies) (`v1.12.3` or higher)

The first step in working with Temporal Nexus involves starting a Temporal Server with Nexus enabled.

```
temporal server start-dev
```

This command automatically starts the Temporal development server with the Web UI, and creates the `default` Namespace. It uses an in-memory database, so do not use it for real use cases.

The Temporal Web UI should now be accessible at [http://localhost:8233](http://localhost:8233), and the Temporal Server should now be available for client connections on `localhost:7233`.

## Create caller and handler Namespaces {#create-caller-handler-namespaces}

Before setting up Nexus endpoints, create separate Namespaces for the caller and handler.

```
temporal operator namespace create --namespace my-target-namespace
temporal operator namespace create --namespace my-caller-namespace
```

For this example, `my-target-namespace` will contain the Nexus Operation handler, and you will use a Workflow in `my-caller-namespace` to call that Operation handler. We use different Namespaces to demonstrate cross-Namespace Nexus calls.
## Create a Nexus Endpoint to route requests from caller to handler {#create-nexus-endpoint}

After establishing caller and handler Namespaces, the next step is to create a Nexus Endpoint to route requests.

```
temporal operator nexus endpoint create \
  --name my-nexus-endpoint-name \
  --target-namespace my-target-namespace \
  --target-task-queue my-handler-task-queue
```

You can also use the Web UI to create the Namespaces and Nexus endpoint.

## Define the Nexus Service contract {#define-nexus-service-contract}

Defining a clear contract for the Nexus Service is crucial for smooth communication. In this example, there is a service package that describes the Service and Operation names along with input/output types for caller Workflows to use the Nexus Endpoint.

Each [Temporal SDK includes and uses a default Data Converter](https://docs.temporal.io/dataconversion). The default data converter encodes payloads in the following order: Null, Byte array, and JSON. In a polyglot environment, that is, where more than one language and SDK are being used to develop a Temporal solution, JSON is a common choice. This example uses plain TypeScript objects, serialized into JSON.

Note: By default, the TypeScript SDK [does not support Protobuf JSON encoding](https://typescript.temporal.io/api/interfaces/common.PayloadConverter). If passing Protobuf payloads, use the [ProtobufJsonPayloadConverter](https://typescript.temporal.io/api/classes/protobufs.ProtobufJsonPayloadConverter) instead.

[nexus-hello/src/api.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello/src/api.ts)
```ts
export const helloService = nexus.service('hello', {
  /**
   * Return the input message, unmodified. In the present sample, this Operation
   * will be implemented using the Synchronous Nexus Operation handler syntax.
   */
  echo: nexus.operation<EchoInput, EchoOutput>(),

  /**
   * Return a salutation message, in the requested language. In the present sample,
   * this Operation will be implemented by starting the `helloWorkflow` Workflow.
   */
  hello: nexus.operation<HelloInput, HelloOutput>(),
});

export interface EchoInput {
  message: string;
}

export interface EchoOutput {
  message: string;
}

export interface HelloInput {
  name: string;
  language: LanguageCode;
}

export interface HelloOutput {
  message: string;
}

export type LanguageCode = 'en' | 'fr' | 'de' | 'es' | 'tr';
```

## Develop a Nexus Service handler and Operation handlers {#develop-nexus-service-operation-handlers}

A Nexus Service handler is defined using the `nexus-rpc` package's [`serviceHandler`](https://nexus-rpc.github.io/sdk-typescript/functions/serviceHandler.html) function. Nexus Service handlers are typically defined in the same Worker as the underlying Temporal primitives they abstract. A Service handler must provide Operation handlers for each Operation declared by the Service.

Operation handlers can decide if a given Nexus Operation will be synchronous or asynchronous. They can execute arbitrary code, and invoke underlying Temporal primitives such as a Workflow, Query, Signal, or Update.

The `@temporalio/nexus` package provides utilities to help create Nexus Operations that interact with a Temporal Namespace:

- `WorkflowRunOperationHandler` - Create an asynchronous operation handler that starts a Workflow.
- `getClient()` - Get a Temporal Client connected using the same `NativeConnection` as the present Temporal Worker. It can be used to implement synchronous handlers backed by Temporal primitives such as Signals and Queries.
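For instance, `getClient()` makes it straightforward to back a synchronous Operation with a Workflow Query. The following is a minimal sketch, not part of the `nexus-hello` sample; the `statusService`, `getStatus` Operation, and `'status'` Query name are illustrative assumptions:

```ts
import * as nexus from 'nexus-rpc';
import * as temporalNexus from '@temporalio/nexus';

// Hypothetical contract: a Service with a single synchronous `getStatus` Operation.
export const statusService = nexus.service('status', {
  getStatus: nexus.operation<{ workflowId: string }, string>(),
});

export const statusServiceHandler = nexus.serviceHandler(statusService, {
  // A plain async function defines a synchronous Operation. Here it answers by
  // querying a running Workflow through the Worker's own Temporal Client.
  // Assumes the target Workflow defines a Query named 'status'.
  getStatus: async (_ctx, input) => {
    const client = temporalNexus.getClient();
    const handle = client.workflow.getHandle(input.workflowId);
    return await handle.query<string>('status');
  },
});
```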
### Develop a Synchronous Nexus Operation handler

Simple RPC handlers can be implemented as synchronous Nexus Operation handlers, which are defined in TypeScript as simple async functions. The handler function can obtain a Temporal Client using `getClient()`, which can be used for signaling, querying, and listing Workflows. However, implementations are free to make arbitrary calls to other services or databases, or perform computations such as this one:

[nexus-hello/src/service/handler.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello/src/service/handler.ts)
```ts
// ...
export const helloServiceHandler = nexus.serviceHandler(helloService, {
  echo: async (ctx, input: EchoInput): Promise<EchoOutput> => {
    // A simple async function can be used to define a Synchronous Nexus Operation.
    // This is often sufficient for Operations that simply make arbitrary short calls to
    // other services or databases, or that perform simple computations such as this one.
    //
    // You may also access a Temporal Client by calling `temporalNexus.getClient()`.
    // That Client can be used to make arbitrary calls, such as signaling, querying,
    // or listing workflows.
    return input;
  },
  // ...
});
```

### Develop an Asynchronous Nexus Operation handler to start a Workflow

Use `@temporalio/nexus`'s `WorkflowRunOperationHandler` helper class to easily expose a Temporal Workflow as a Nexus Operation.

Note that even though a Nexus Operation can take only one input parameter, you can still pass multiple arguments through to the Workflow: put them on separate properties of the input object, then place them in the array provided to the `args` option when calling `startWorkflow`.

[nexus-hello/src/service/handler.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello/src/service/handler.ts)
```ts
export const helloServiceHandler = nexus.serviceHandler(helloService, {
  // ...
  hello: new temporalNexus.WorkflowRunOperationHandler(
    // WorkflowRunOperationHandler takes a function that receives the Operation's context and input.
    // That function can be used to validate and/or transform the input before passing it to
    // the Workflow, as well as to customize various Workflow start options as appropriate.
    // Call temporalNexus.startWorkflow() to actually start the Workflow from inside the
    // WorkflowRunOperationHandler's delegate function.
    async (ctx, input: HelloInput) => {
      return await temporalNexus.startWorkflow(ctx, helloWorkflow, {
        args: [input],
        // Workflow IDs should typically be business-meaningful IDs and are used to dedupe workflow starts.
        // For this example, we're using the request ID allocated by Temporal when the caller workflow schedules
        // the operation; this ID is guaranteed to be stable across retries of this operation.
        workflowId: ctx.requestId ?? randomUUID(),
        // Task queue defaults to the task queue this Operation is handled on.
      });
    },
  ),
});
```

Workflow IDs should typically be business-meaningful IDs and are used to dedupe Workflow starts. In general, the ID should be passed in the Operation input as part of the Nexus Service contract.

:::tip RESOURCES
[Attach multiple Nexus callers to a handler Workflow](/nexus/operations#attaching-multiple-nexus-callers) with a Conflict-Policy of Use-Existing.
:::

### Register your Nexus Service handler in a Worker

After developing an asynchronous Nexus Operation handler to start a Workflow, the next step is to register your Nexus Service handler in a Worker.

[nexus-hello/src/service/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello/src/service/worker.ts)
```ts
// ...
const namespace = 'my-target-namespace';
const serviceTaskQueue = 'my-handler-task-queue';

const worker = await Worker.create({
  connection,
  namespace,
  taskQueue: serviceTaskQueue,
  workflowsPath: require.resolve('./workflows'),
  nexusServices: [helloServiceHandler],
});
```

## Develop a caller Workflow that uses the Nexus Service {#develop-caller-workflow-nexus-service}

To execute a Nexus Operation from a Workflow, import the necessary service definition types, then use `@temporalio/workflow`'s `createNexusClient` to create a Nexus client for that service. You will need to provide the Nexus Endpoint name, which you registered previously in [Create a Nexus Endpoint to route requests from caller to handler](#create-nexus-endpoint).

[nexus-hello/src/caller/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello/src/caller/workflows.ts)
```ts
const HELLO_SERVICE_ENDPOINT = "hello-service-endpoint-name";

export async function helloCallerWorkflow(name: string, language: LanguageCode): Promise<string> {
  const nexusClient = wf.createNexusClient({
    service: helloService,
    endpoint: HELLO_SERVICE_ENDPOINT,
  });
  const helloResult = await nexusClient.executeOperation(
    "hello",
    { name, language },
    { scheduleToCloseTimeout: "10s" }
  );
  return helloResult.message;
}
```

### Register the caller Workflow in a Worker and start the caller Workflow

This Workflow can be registered with a Worker and started using `client.workflow.start()` or `client.workflow.execute()`, as usual. Refer to the [complete TypeScript sample](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello) for reference.

- [nexus-hello/src/caller/worker.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello/src/caller/worker.ts) shows how to register the caller Workflow in a Worker and run the Worker.
- [nexus-hello/src/starter.ts](https://github.com/temporalio/samples-typescript/blob/main/nexus-hello/src/starter.ts) shows how to use a Temporal Client to execute the sample caller Workflow.

## Exceptions in Nexus operations {#exceptions-in-nexus-operations}

Temporal provides general guidance on [Errors in Nexus operations](https://docs.temporal.io/references/failures#errors-in-nexus-operations). In TypeScript, there are three Nexus-specific exception classes:

- `nexus-rpc`'s [`OperationError`](https://nexus-rpc.github.io/sdk-typescript/classes/OperationError.html): this is the exception type you should throw in a Nexus operation to indicate that it has failed according to its own application logic and should not be retried.
- `nexus-rpc`'s [`HandlerError`](https://nexus-rpc.github.io/sdk-typescript/classes/HandlerError.html): you can throw this exception type in a Nexus operation with a specific [HandlerErrorType](https://nexus-rpc.github.io/sdk-typescript/types/HandlerErrorType.html). The error will be marked as either retryable or non-retryable according to the type, following the [Nexus spec](https://github.com/nexus-rpc/api/blob/main/SPEC.md#predefined-handler-errors).
  The non-retryable handler error types are `BAD_REQUEST`, `UNAUTHENTICATED`, `UNAUTHORIZED`, `NOT_FOUND`, and `NOT_IMPLEMENTED`; the retryable types are `RESOURCE_EXHAUSTED`, `INTERNAL`, `UNAVAILABLE`, and `UPSTREAM_TIMEOUT`.
- [`NexusOperationFailure`](https://typescript.temporal.io/api/classes/common.NexusOperationFailure) from `@temporalio/common`: this is the error thrown inside a Workflow when a Nexus operation fails for any reason. Use the `cause` attribute on the exception to access the cause chain.

## Canceling a Nexus Operation {#canceling-a-nexus-operation}

Nexus Operations, just like other cancellable APIs provided by the `@temporalio/workflow` package, execute within Cancellation Scopes. Requesting cancellation of a Cancellation Scope results in requesting cancellation of all cancellable operations owned by that scope.

The Workflow itself defines the root Cancellation Scope. Requesting cancellation of the Workflow therefore propagates the cancellation request to all cancellable operations started by that Workflow, including Nexus Operations. To provide more granular control over cancellation of a specific Nexus Operation, you may explicitly create a new Cancellation Scope and start the Nexus Operation from within that scope. An example demonstrating this can be found in our [nexus cancellation sample](https://github.com/temporalio/samples-typescript/tree/main/nexus-cancellation).

Only asynchronous operations can be canceled in Nexus, since cancellation is sent using an operation token. The Workflow or other resources backing the operation may choose to ignore the cancellation request. Once the caller Workflow completes, the caller's Nexus Machinery will not make any further attempts to cancel operations that are still running. It's okay to leave operations running in some use cases. To ensure cancellations are delivered, wait for all pending operations to finish before exiting the Workflow.

## Make Nexus calls across Namespaces in Temporal Cloud {#nexus-calls-across-namespaces-temporal-cloud}

This section assumes you are already familiar with how to connect a Worker to Temporal Cloud. The `tcld` CLI is used to create Namespaces and the Nexus Endpoint, and mTLS client certificates will be used to securely connect the caller and handler Workers to their respective Temporal Cloud Namespaces.

### Install the latest `tcld` CLI and generate certificates

To install the latest version of the `tcld` CLI, run the following command (on macOS):

```
brew install temporalio/brew/tcld
```

If you don't already have certificates, you can generate them for mTLS Worker authentication using the command below:

```
tcld gen ca --org $YOUR_ORG_NAME --validity-period 1y --ca-cert ca.pem --ca-key ca.key
```

These certificates will be valid for one year.

### Create caller and handler Namespaces

Before deploying to Temporal Cloud, ensure that the appropriate Namespaces are created for both the caller and handler. If you already have these Namespaces, you don't need to do this.

```
tcld login

tcld namespace create \
  --namespace <caller-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1

tcld namespace create \
  --namespace <handler-namespace> \
  --cloud-provider aws \
  --region us-west-2 \
  --ca-certificate-file 'path/to/your/ca.pem' \
  --retention-days 1
```

Alternatively, you can create Namespaces through the UI: [https://cloud.temporal.io/namespaces](https://cloud.temporal.io/namespaces).
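With the Namespaces created, each Worker connects to its own Namespace over mTLS. Here's a minimal sketch of what that connection might look like for the handler Worker; the address, certificate paths, and the `./handler` import are placeholder assumptions modeled on the `nexus-hello` sample layout:

```ts
import fs from 'node:fs/promises';
import { NativeConnection, Worker } from '@temporalio/worker';
import { helloServiceHandler } from './handler'; // assumed sample module

async function run() {
  // Substitute your Namespace endpoint and client certificate paths.
  const connection = await NativeConnection.connect({
    address: '<handler-namespace>.<account-id>.tmprl.cloud:7233',
    tls: {
      clientCertPair: {
        crt: await fs.readFile('path/to/your/client.pem'),
        key: await fs.readFile('path/to/your/client.key'),
      },
    },
  });
  const worker = await Worker.create({
    connection,
    namespace: '<handler-namespace>.<account-id>',
    taskQueue: 'my-handler-task-queue',
    workflowsPath: require.resolve('./workflows'),
    nexusServices: [helloServiceHandler],
  });
  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```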
### Create a Nexus Endpoint to route requests from caller to handler

To create a Nexus Endpoint, you must have a Developer account role or higher, and have NamespaceAdmin permission on the `--target-namespace`.

```
tcld nexus endpoint create \
  --name <endpoint-name> \
  --target-task-queue my-handler-task-queue \
  --target-namespace <handler-namespace> \
  --allow-namespace <caller-namespace> \
  --description-file description.md
```

The `--allow-namespace` option builds an Endpoint allowlist of caller Namespaces that can use the Nexus Endpoint, as described in Runtime Access Control.

Alternatively, you can create a Nexus Endpoint through the UI: [https://cloud.temporal.io/nexus](https://cloud.temporal.io/nexus).

## Observability

### Web UI

A synchronous Nexus Operation surfaces in the caller's Workflow history with just `NexusOperationScheduled` and `NexusOperationCompleted` events.

An asynchronous Nexus Operation surfaces in the caller's Workflow history with `NexusOperationScheduled`, `NexusOperationStarted`, and `NexusOperationCompleted` events.

### Temporal CLI

Use the `workflow describe` command to show pending Nexus Operations in the caller Workflow and any attached callbacks on the handler Workflow:

```
temporal workflow describe -w <workflow-id>
```

Nexus events are included in the caller's Workflow history:

```
temporal workflow show -w <workflow-id>
```

For **asynchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationStarted`
- `NexusOperationCompleted`

For **synchronous Nexus Operations** the following are reported in the caller's history:

- `NexusOperationScheduled`
- `NexusOperationCompleted`

:::note
`NexusOperationStarted` isn't reported in the caller's history for synchronous operations.
:::

## Learn more

- Read the high-level description of the [Temporal Nexus feature](/evaluate/nexus) and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- Learn how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4) and [Encyclopedia](/nexus).
- Deploy Nexus Endpoints in production with [Temporal Cloud](/cloud/nexus).

---

## Testing - TypeScript SDK

The Testing section of the Temporal Application development guide describes the frameworks that facilitate Workflow and integration testing.

In the context of Temporal, you can create these types of automated tests:

- **End-to-end:** Running a Temporal Server and Worker with all its Workflows and Activities; starting and interacting with Workflows from a Client.
- **Integration:** Anything between end-to-end and unit testing.
  - Running Activities with mocked Context and other SDK imports (and usually network requests).
  - Running Workers with mock Activities, and using a Client to start Workflows.
  - Running Workflows with mocked SDK imports.
- **Unit:** Running a piece of Workflow or Activity code (a function or method) and mocking any code it calls.

We generally recommend writing the majority of your tests as integration tests.

Because the test server supports skipping time, use the test server for both end-to-end and integration tests with Workers.

## Test frameworks {#test-frameworks}

Some SDKs have support or examples for popular test frameworks, runners, or libraries.

TypeScript has sample tests for [Jest](https://jestjs.io/) and [Mocha](https://mochajs.org/).
**Jest**

- Minimum Jest version: `27.0.0`
- [Sample test file](https://github.com/temporalio/samples-typescript/blob/main/activities-examples/src/workflows.test.ts)
- [`jest.config.js`](https://github.com/temporalio/samples-typescript/blob/main/activities-examples/jest.config.js) (must use [`testEnvironment: 'node'`](https://jestjs.io/docs/configuration#testenvironment-string); `testEnvironment: 'jsdom'` is not supported)

**Mocha**

- [Sample test file](https://github.com/temporalio/samples-typescript/blob/main/activities-examples/src/mocha/workflows.test.ts)
- Test coverage library: [`@temporalio/nyc-test-coverage`](https://github.com/temporalio/sdk-typescript/tree/main/packages/nyc-test-coverage)

## Testing Activities {#test-activities}

An Activity can be tested with a mock Activity environment, which provides a way to mock the Activity context, listen to Heartbeats, and cancel the Activity. This behavior allows you to test the Activity in isolation by calling it directly, without needing to create a Worker to run the Activity.

### Run an Activity {#run-an-activity}

If an Activity references its context, you need to mock that context when testing in isolation. First, create a [`MockActivityEnvironment`](https://typescript.temporal.io/api/classes/testing.MockActivityEnvironment). The constructor accepts an optional partial Activity [`Info`](https://typescript.temporal.io/api/interfaces/activity.Info) object in case any info fields are needed for the test.

Then use [`MockActivityEnvironment.run()`](https://typescript.temporal.io/api/classes/testing.MockActivityEnvironment#run) to run a function in an Activity [Context](https://typescript.temporal.io/api/classes/activity.Context).

```ts
// A function that takes two numbers and returns a promise that resolves to the sum of the two numbers
// and the current attempt.
async function activityFoo(a: number, b: number): Promise<number> {
  return a + b + activityInfo().attempt;
}

// Create a MockActivityEnvironment with attempt set to 2. Run the activityFoo
// function with parameters 5 and 35. Assert that the result is 42.
const env = new MockActivityEnvironment({ attempt: 2 });
const result = await env.run(activityFoo, 5, 35);
assert.equal(result, 42);
```

### Listen to Heartbeats {#listen-to-heartbeats}

To verify the Heartbeats an Activity sends, listen for them in your test code. [`MockActivityEnvironment`](https://typescript.temporal.io/api/classes/testing.MockActivityEnvironment) is an [`EventEmitter`](https://nodejs.org/api/events.html#class-eventemitter) that emits a `heartbeat` event that you can use to listen for Heartbeats emitted by the Activity.

When an Activity is run by a Worker, Heartbeats are throttled to avoid overloading the server. `MockActivityEnvironment`, however, does not throttle Heartbeats.

```ts
async function activityFoo(): Promise<void> {
  heartbeat(6);
}

const env = new MockActivityEnvironment();
env.on('heartbeat', (d: unknown) => {
  assert(d === 6);
});
await env.run(activityFoo);
```

### Cancel an Activity {#cancel-an-activity}

If an Activity is supposed to react to a Cancellation, you can test whether it reacts correctly by canceling it. [`MockActivityEnvironment`](https://typescript.temporal.io/api/classes/testing.MockActivityEnvironment) exposes a [`.cancel()`](https://typescript.temporal.io/api/classes/testing.MockActivityEnvironment#cancel) method that cancels the Activity Context.
```ts
async function activityFoo(): Promise<void> {
  heartbeat(6);
  // @temporalio/activity's sleep() is Cancellation-aware, which means that on Cancellation,
  // CancelledFailure will be thrown from it.
  await sleep(100);
}

const env = new MockActivityEnvironment();
env.on('heartbeat', (d: unknown) => {
  assert(d === 6);
  // Cancel the Activity Context once the Activity has started running.
  env.cancel();
});
await assert.rejects(env.run(activityFoo), (err) => {
  assert.ok(err instanceof CancelledFailure);
  return true;
});
```

## Testing Workflows {#test-workflows}

### How to mock Activities {#mock-activities}

Mock the Activity invocation when unit testing your Workflows.

When integration testing Workflows with a Worker, you can mock Activities by providing mock Activity implementations to the Worker. Implement only the relevant Activities for the Workflow being tested.

```ts
// Creating a mock object of the activities.
const mockActivities: Partial<typeof activities> = {
  makeHTTPRequest: async () => '99',
};

// Creating a worker with the mocked activities.
const worker = await Worker.create({
  activities: mockActivities,
  // ...
});
```

### How to skip time {#skip-time}

Some long-running Workflows can persist for months or even years. The time-skipping test framework allows your Workflow code to skip time, so your tests complete in seconds rather than the full durations your Workflows specify. For example, if you have a Workflow that sleeps for a day, or an Activity failure with a long retry interval, you don't need to wait the entire length of the sleep period to test whether the sleep function works. Instead, test the logic that happens after the sleep by skipping forward in time and complete your tests in a timely manner.

The test framework included in most SDKs is an in-memory implementation of Temporal Server that supports skipping time. Time is a global property of an instance of `TestWorkflowEnvironment`: skipping time (either automatically or manually) applies to all currently running tests. If you need different time behaviors for different tests, run your tests in a series or with separate instances of the test server. For example, you could run all tests with automatic time skipping in parallel, and then all tests with manual time skipping in series, and then all tests without time skipping in parallel.

#### Set up time skipping {#setting-up}

Set up the time-skipping test framework in the SDK of your choice.

```bash
npm install @temporalio/testing
```

The `@temporalio/testing` package downloads the test server and exports [`TestWorkflowEnvironment`](https://typescript.temporal.io/api/classes/testing.TestWorkflowEnvironment), which you use to connect the Client and Worker to the test server and interact with the test server.

[`TestWorkflowEnvironment.createTimeSkipping`](https://typescript.temporal.io/api/classes/testing.TestWorkflowEnvironment#createtimeskipping) starts the test server. A typical test suite should set up a single instance of the test environment to be reused in all tests (for example, in a [Jest](https://jestjs.io/) `beforeAll` hook or a [Mocha](https://mochajs.org/) `before()` hook).
```typescript
let testEnv: TestWorkflowEnvironment;

// beforeAll and afterAll are injected by Jest
beforeAll(async () => {
  testEnv = await TestWorkflowEnvironment.createTimeSkipping();
});

afterAll(async () => {
  await testEnv?.teardown();
});
```

`TestWorkflowEnvironment` has [`client`](https://typescript.temporal.io/api/classes/testing.TestWorkflowEnvironment#client) and [`nativeConnection`](https://typescript.temporal.io/api/classes/testing.TestWorkflowEnvironment#nativeconnection) for creating Workers:

```typescript
test('workflowFoo', async () => {
  const worker = await Worker.create({
    connection: testEnv.nativeConnection,
    taskQueue: 'test',
    // ...
  });
  const result = await worker.runUntil(
    testEnv.client.workflow.execute(workflowFoo, {
      workflowId: uuid4(),
      taskQueue: 'test',
    })
  );
  expect(result).toEqual('foo');
});
```

This test uses the test connection to create a Worker, runs the Worker until the Workflow is complete, and then makes an assertion about the Workflow's result. The Workflow is executed using `testEnv.client.workflow`, which is connected to the test server.

#### Skip time automatically {#automatic-method}

You can skip time automatically in the SDK of your choice.

The test server starts in "normal" time. When you use `TestWorkflowEnvironment.client.workflow.execute()` or `.result()`, the test server switches to "skipped" time mode until the Workflow completes. In "skipped" mode, Timers (`sleep()` calls and `condition()` timeouts) are fast-forwarded except when Activities are running.

`workflows.ts`

```ts
export async function sleeperWorkflow() {
  await sleep('1 day');
}
```

`test.ts`

```ts
test('sleep completes almost immediately', async () => {
  const worker = await Worker.create({
    connection: testEnv.nativeConnection,
    taskQueue: 'test',
    workflowsPath: require.resolve('./workflows'),
  });
  // Does not wait an entire day
  await worker.runUntil(
    testEnv.client.workflow.execute(sleeperWorkflow, {
      workflowId: uuid(),
      taskQueue: 'test',
    }),
  );
});
```

#### Skip time manually {#manual-method}

Skip time manually in the SDK of your choice.

You can call `testEnv.sleep()` from your test code to advance the test server's time. This is useful for testing intermediate states or indefinitely long-running Workflows. However, to use `testEnv.sleep()`, you need to avoid automatic time skipping by starting the Workflow with `.start()` instead of `.execute()` (and not calling `.result()`).

`workflow.ts`

```ts
export const daysQuery = defineQuery<number>('days');

export async function sleeperWorkflow() {
  let numDays = 0;

  setHandler(daysQuery, () => numDays);

  for (let i = 0; i < 100; i++) {
    await sleep('1 day');
    numDays++;
  }
}
```

`test.ts`

```ts
test('sleeperWorkflow counts days correctly', async () => {
  const worker = await Worker.create({
    connection: testEnv.nativeConnection,
    taskQueue: 'test',
    workflowsPath: require.resolve('./workflows'),
  });

  // `start()` starts the test server in "normal" mode, not skipped time mode.
  // If you don't advance time using `testEnv.sleep()`, then `sleeperWorkflow()`
  // will run for days.
  const handle = await testEnv.client.workflow.start(sleeperWorkflow, {
    workflowId: uuid4(),
    taskQueue: 'test',
  });

  // Run the Worker in the background (intentionally not awaited).
  worker.run();

  let numDays = await handle.query(daysQuery);
  assert.equal(numDays, 0);

  // Advance the test server's time by 25 hours
  await testEnv.sleep('25 hours');
  numDays = await handle.query(daysQuery);
  assert.equal(numDays, 1);

  await testEnv.sleep('25 hours');
  numDays = await handle.query(daysQuery);
  assert.equal(numDays, 2);
});
```

#### Skip time in Activities {#skip-time-in-activities}

Skip time in Activities in the SDK of your choice.

Call [`TestWorkflowEnvironment.sleep`](https://typescript.temporal.io/api/classes/testing.TestWorkflowEnvironment#sleep) from the mock Activity.

In the following test, `processOrderWorkflow` sends a notification to the user after one day. The `processOrder` mocked Activity calls `testEnv.sleep('2 days')`, during which the Workflow sends the email (by calling the `sendNotificationEmail` Activity). Then, after the Workflow completes, we assert that `sendNotificationEmail` was called.
Workflow implementation

[timer-examples/src/workflows.ts](https://github.com/temporalio/samples-typescript/blob/main/timer-examples/src/workflows.ts)
```ts
export async function processOrderWorkflow({
  orderProcessingMS,
  sendDelayedEmailTimeoutMS,
}: ProcessOrderOptions): Promise<string> {
  let processing = true;
  // Dynamically define the timeout based on given input
  const { processOrder, sendNotificationEmail } = proxyActivities<ReturnType<typeof createActivities>>({
    startToCloseTimeout: orderProcessingMS,
  });

  const processOrderPromise = processOrder().then(() => {
    processing = false;
  });

  await Promise.race([processOrderPromise, sleep(sendDelayedEmailTimeoutMS)]);

  if (processing) {
    await sendNotificationEmail();
    await processOrderPromise;
  }

  return 'Order completed!';
}
```
[timer-examples/src/test/workflows.test.ts](https://github.com/temporalio/samples-typescript/blob/main/timer-examples/src/test/workflows.test.ts)
```ts
it('sends reminder email if processOrder does not complete in time', async () => {
  // This test doesn't actually take days to complete: the TestWorkflowEnvironment starts the
  // Test Server, which automatically skips time when there are no running Activities.
  let emailSent = false;
  const mockActivities: ReturnType<typeof createActivities> = {
    async processOrder() {
      // Test server switches to "normal" time while an Activity is executing.
      // Call `env.sleep` to skip ahead 2 days, by which time sendNotificationEmail
      // should have been called.
      await env.sleep('2 days');
    },
    async sendNotificationEmail() {
      emailSent = true;
    },
  };
  const worker = await Worker.create({
    connection: env.nativeConnection,
    taskQueue: 'test',
    workflowsPath: require.resolve('../workflows'),
    activities: mockActivities,
  });
  await worker.runUntil(
    env.client.workflow.execute(processOrderWorkflow, {
      workflowId: uuid(),
      taskQueue: 'test',
      args: [{ orderProcessingMS: ms('3 days'), sendDelayedEmailTimeoutMS: ms('1 day') }],
    }),
  );
  assert.ok(emailSent);
});
```

### Test functions in Workflow context {#workflow-context}

For a function or method to run in the Workflow context (where it's possible to get the current Workflow info, or running inside the sandbox in the case of TypeScript or Python), it needs to be run by the Worker as if it were a Workflow.

:::note
This section is applicable in Python and TypeScript. In Python, we allow testing of Workflows only and not generic Workflow-related code.
:::

To test a function in your Workflow code that isn't a Workflow, put the file it's exported from in [WorkerOptions.workflowsPath](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions#workflowspath). Then execute the function as if it were a Workflow:

`workflows/file-with-workflow-function-to-test.ts`

```ts
export async function functionToTest(): Promise<number> {
  await sleep('1 day');
  return 42;
}
```

`test.ts`

```ts
const worker = await Worker.create({
  connection: testEnv.nativeConnection,
  workflowsPath: require.resolve(
    './workflows/file-with-workflow-function-to-test',
  ),
});

const result = await worker.runUntil(
  testEnv.client.workflow.execute(functionToTest, workflowOptions),
);
assert.equal(result, 42);
```

If `functionToTest` starts a Child Workflow, that Workflow must be exported from the same file (so that the Worker knows about it):

```ts
export { someWorkflowToRunAsChild };

export async function functionToTest(): Promise<number> {
  const result = await wf.executeChild(someWorkflowToRunAsChild);
  return result + 42;
}
```

### Assert in Workflow {#assert-in-workflow}

The `assert` statement is a convenient way to insert debugging assertions into the Workflow context. The `assert` method is available in Python and TypeScript.

The Node.js [`assert`](https://nodejs.org/api/assert.html) module is included in Workflow bundles. By default, a failed `assert` statement throws `AssertionError`, which causes a [Workflow Task](/tasks#workflow-task) to fail and be indefinitely retried.

To prevent this behavior, use [`workflowInterceptorModules`](https://typescript.temporal.io/api/namespaces/testing/#workflowinterceptormodules) from `@temporalio/testing`. These interceptors catch an `AssertionError` and turn it into an `ApplicationFailure` that fails the entire Workflow Execution (not just the Workflow Task).
`workflows/file-with-workflow-function-to-test.ts`

```ts
import assert from 'assert';

export async function functionToTest() {
  assert.ok(false);
}
```

`test.ts`

```ts
import {
  TestWorkflowEnvironment,
  workflowInterceptorModules,
} from '@temporalio/testing';

const worker = await Worker.create({
  connection: testEnv.nativeConnection,
  interceptors: {
    workflowModules: workflowInterceptorModules,
  },
  workflowsPath: require.resolve(
    './workflows/file-with-workflow-function-to-test',
  ),
});

await worker.runUntil(
  testEnv.client.workflow.execute(functionToTest, workflowOptions), // throws WorkflowFailedError
);
```

## How to Replay a Workflow Execution {#replay}

Replay recreates the exact state of a Workflow Execution. You can replay a Workflow from the beginning of its Event History.

Replay succeeds only if the [Workflow Definition](/workflow-definition) is compatible with the provided history from a deterministic point of view.

When you test changes to your Workflow Definitions, we recommend doing the following as part of your CI checks:

1. Determine which Workflow Types or Task Queues (or both) will be targeted by the Worker code under test.
2. Download the Event Histories of a representative set of recent open and closed Workflows from each Task Queue, either programmatically using the SDK client or via the Temporal CLI.
3. Run the Event Histories through replay.
4. Fail CI if any error is encountered during replay.

The following are examples of fetching and replaying Event Histories:

To replay a single Event History, use [worker.runReplayHistory](https://typescript.temporal.io/api/classes/worker.Worker#runreplayhistory).

When an Event History is replayed and non-determinism is detected (that is, the Workflow code is incompatible with the History), [DeterminismViolationError](https://typescript.temporal.io/api/classes/workflow.DeterminismViolationError) is thrown. If replay fails for any other reason, [ReplayError](https://typescript.temporal.io/api/classes/worker.ReplayError) is thrown.

In the following example, a single Event History is loaded from a JSON file on disk (as obtained from the [Web UI](/web-ui) or the [Temporal CLI](/cli/workflow#show)):

```ts
const filePath = './history_file.json';
const history = JSON.parse(await fs.promises.readFile(filePath, 'utf8'));
await Worker.runReplayHistory(
  {
    workflowsPath: require.resolve('./your/workflows'),
  },
  history,
);
```

Alternatively, we can download the Event History programmatically using a Client:

```ts
const connection = await Connection.connect({ address });
const client = new Client({ connection, namespace: 'your-namespace' });
const handle = client.workflow.getHandle('your-workflow-id');
const history = await handle.fetchHistory();
await Worker.runReplayHistory(
  {
    workflowsPath: require.resolve('./your/workflows'),
  },
  history,
);
```

To gain confidence that changes to a Workflow are safe to deploy, we recommend that you obtain Event Histories from the relevant Task Queue and replay them in bulk. You can do so by combining the [Client.workflow.list()](https://typescript.temporal.io/api/classes/client.WorkflowClient#list) and [worker.runReplayHistories()](https://typescript.temporal.io/api/classes/worker.Worker#runreplayhistories) APIs.

In the following example (which, as of server 1.18, requires [Advanced Visibility](/visibility#advanced-visibility) to be enabled), Event Histories are downloaded from the server and then replayed by passing in a client and a set of Workflow Executions.
The [results](https://typescript.temporal.io/api/interfaces/worker.ReplayResult) returned by the async iterator contain information about the Workflow Execution and whether an error occurred during replay. ```ts const executions = client.workflow.list({ query: 'TaskQueue=foo and StartTime > "2022-01-01T12:00:00"', }); const histories = executions.intoHistories(); const results = Worker.runReplayHistories( { workflowsPath: require.resolve('./your/workflows'), }, histories, ); for await (const result of results) { if (result.error) { console.error('Replay failed', result); } } ``` --- ## Durable Timers - TypeScript SDK ## What is a Timer? {#timers} A Workflow can set a durable Timer for a fixed time period. In some SDKs, the function is called `sleep()`, and in others, it's called `timer()`. A Workflow can sleep for months. Timers are persisted, so even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service are back up, the `sleep()` call will resolve and your code will continue executing. Sleeping is a resource-light operation: it does not tie up the process, and you can run millions of Timers off a single Worker. ## Asynchronous design patterns in TypeScript {#asynchronous-design-patterns} The real value of `sleep` and `condition` is in knowing how to use them to model asynchronous business logic. Here are some examples we use the most; we welcome more if you can think of them!
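For instance, the simplest form of this is a durable reminder. The following is a minimal sketch; `sendReminderEmail` is an assumed Activity, proxied the usual way:

```ts
import { proxyActivities, sleep } from '@temporalio/workflow';
import type * as activities from './activities'; // assumed Activities module

const { sendReminderEmail } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

// The sleep() Timer is persisted by the Temporal Service: the Workflow resumes
// and the Timer still fires even if Workers restart while it's pending.
export async function reminderWorkflow(userId: string): Promise<void> {
  await sleep('30 days');
  await sendReminderEmail(userId);
}
```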
Racing Timers

Use `Promise.race` with Timers to dynamically adjust delays.

```ts
import { sleep } from '@temporalio/workflow';

// `processOrder` and `sendNotificationEmail` are assumed to be Activities
// proxied via `proxyActivities`; `ProcessOrderOptions` carries the two
// timeout values used below.
export async function processOrderWorkflow({
  orderProcessingMS,
  sendDelayedEmailTimeoutMS,
}: ProcessOrderOptions): Promise<void> {
  let processing = true;
  const processOrderPromise = processOrder(orderProcessingMS).then(() => {
    processing = false;
  });

  await Promise.race([processOrderPromise, sleep(sendDelayedEmailTimeoutMS)]);

  if (processing) {
    await sendNotificationEmail();
    await processOrderPromise;
  }
}
```
Racing Signals

Use `Promise.race` with Signals and Triggers to have a promise resolve at the earlier of either system time or human intervention.

```ts
import { defineSignal, setHandler, sleep, Trigger } from '@temporalio/workflow';

const userInteraction = new Trigger<boolean>();
const completeUserInteraction = defineSignal('completeUserInteraction');

export async function yourWorkflow(userId: string) {
  setHandler(completeUserInteraction, () => userInteraction.resolve(true)); // programmatic resolve
  const userInteracted = await Promise.race([
    userInteraction,
    sleep('30 days'),
  ]);
  if (!userInteracted) {
    // `sendReminderEmail` is assumed to be a proxied Activity
    await sendReminderEmail(userId);
  }
}
```

You can invert this to create a reminder pattern where the promise resolves _if_ no Signal is received.

:::caution Antipattern: Racing sleep.then

Be careful when racing a chained `sleep`. This might cause bugs because the chained `.then` will still continue to execute.

```ts
// `somethingElse` and `complete` are illustrative placeholders
await Promise.race([
  sleep('5s').then(() => (status = 'timed_out')),
  somethingElse.then(() => (status = 'processed')),
]);

if (status === 'processed') await complete(); // takes more than 5 seconds
// the chained `.then` on `sleep('5s')` has run in the meantime: status === 'timed_out'
```

:::
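One way to avoid the antipattern, sketched under the same assumptions as the caution above (`somethingElse` and `complete` remain illustrative placeholders), is to take the outcome from the race winner's resolved value rather than mutating shared state in chained `.then` callbacks:

```ts
// Only the winner's value is used; the loser's chained `.then` still runs
// later, but its return value is discarded and no shared state is mutated.
const outcome = await Promise.race([
  sleep('5s').then(() => 'timed_out' as const),
  somethingElse.then(() => 'processed' as const),
]);
if (outcome === 'processed') await complete();
```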
Updatable Timer

Here is how you can build an updatable Timer with `condition`:

```ts
import * as wf from '@temporalio/workflow';

// Signal and Query definitions, as in the `temporal-time-utils` package
const setDeadlineSignal = wf.defineSignal<[number]>('setDeadline');
const timeLeftQuery = wf.defineQuery<number>('timeLeft');

// usage
export async function countdownWorkflow(): Promise<void> {
  const target = Date.now() + 24 * 60 * 60 * 1000; // 1 day!!!
  const timer = new UpdatableTimer(target);
  console.log('timer set for: ' + new Date(target).toString());
  wf.setHandler(setDeadlineSignal, (deadline) => {
    // send in new deadlines via Signal
    timer.deadline = deadline;
    console.log('timer now set for: ' + new Date(deadline).toString());
  });
  wf.setHandler(timeLeftQuery, () => timer.deadline - Date.now());
  await timer; // if you send in a signal with a new time, this timer will resolve earlier!
  console.log('countdown done!');
}
```

This is available in the third-party package [`temporal-time-utils`](https://www.npmjs.com/package/temporal-time-utils#user-content-updatabletimer), where you can also see the implementation:

```ts
// implementation
export class UpdatableTimer implements PromiseLike<void> {
  deadlineUpdated = false;
  #deadline: number;

  constructor(deadline: number) {
    this.#deadline = deadline;
  }

  private async run(): Promise<void> {
    /* eslint-disable no-constant-condition */
    while (true) {
      this.deadlineUpdated = false;
      if (
        !(await wf.condition(
          () => this.deadlineUpdated,
          this.#deadline - Date.now(),
        ))
      ) {
        break;
      }
    }
  }

  then<TResult1 = void, TResult2 = never>(
    onfulfilled?: (value: void) => TResult1 | PromiseLike<TResult1>,
    onrejected?: (reason: any) => TResult2 | PromiseLike<TResult2>,
  ): PromiseLike<TResult1 | TResult2> {
    return this.run().then(onfulfilled, onrejected);
  }

  set deadline(value: number) {
    this.#deadline = value;
    this.deadlineUpdated = true;
  }

  get deadline(): number {
    return this.#deadline;
  }
}
```
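For completeness, here is a hedged client-side sketch of updating a running countdown. It assumes `setDeadlineSignal` and `timeLeftQuery` are exported from the Workflow code, and uses a hypothetical Workflow Id:

```ts
import { Client, Connection } from '@temporalio/client';
import { setDeadlineSignal, timeLeftQuery } from './workflows'; // hypothetical module path

const connection = await Connection.connect({ address: 'localhost:7233' });
const client = new Client({ connection });

const handle = client.workflow.getHandle('countdown-workflow-id'); // hypothetical Workflow Id
await handle.signal(setDeadlineSignal, Date.now() + 60 * 60 * 1000); // move the deadline to 1 hour from now
console.log('ms left:', await handle.query(timeLeftQuery));
```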
---

## Versioning - TypeScript SDK

Since Workflow Executions in Temporal can run for long periods (sometimes months or even years), it's common to need to make changes to a Workflow Definition, even while a particular Workflow Execution is in progress. The Temporal Platform requires that Workflow code is [deterministic](/workflow-definition#deterministic-constraints). If you make a change to your Workflow code that would cause non-deterministic behavior on Replay, you'll need to use one of our Versioning methods to gracefully update your running Workflows.

With Versioning, you can modify your Workflow Definition so that new executions use the updated code, while existing ones continue running the original version. There are two primary Versioning methods that you can use:

- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). The Worker Versioning feature allows you to tag your Workers and programmatically roll them out in versioned deployments, so that old Workers can run old code paths and new Workers can run new code paths.
- [Versioning with Patching](#patching). This method works by adding branches to your code tied to specific revisions. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions.

:::danger

Support for the experimental pre-2025 Worker Versioning method will be removed from Temporal Server in March 2026. Refer to the [latest Worker Versioning docs](/worker-versioning) for guidance. You can still refer to the [Worker Versioning Legacy](worker-versioning-legacy) docs if needed.

:::

## Versioning with Patching {#patching}

### Adding a patch

A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow Executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning).

Suppose you have an initial Workflow that runs `activityA`:

```ts
// v1
export async function myWorkflow(): Promise<void> {
  await activityA();
  await sleep('1 days'); // arbitrary long sleep to simulate a long-running Workflow we need to patch
  await activityThatMustRunAfterA();
}
```

Now, you want to update your code to run `activityB` instead. This represents your desired end state.

```ts
// vFinal
export async function myWorkflow(): Promise<void> {
  await activityB();
  await sleep('1 days');
}
```

The problem is that you cannot deploy this `vFinal` revision directly until you're certain there are no more running Workflows created using the `v1` code; otherwise, you are likely to cause a non-determinism error. Instead, you'll need to use the [`patched`](https://typescript.temporal.io/api/namespaces/workflow#patched) function to check which version of the code should be executed. Patching is a three-step process:

1. Patch in any new, updated code using the `patched()` function. Run the new patched code alongside old code.
2. Remove old code and use `deprecatePatch()` to mark a particular patch as deprecated.
3. Once there are no longer any open Workflow Executions of the previous version of the code, remove `deprecatePatch()`.

Let's walk through this process in sequence.

### Patching in new code

Using `patched` inserts a marker into the Workflow History.
During Replay, if a Worker encounters a history with that marker, it will fail the Workflow Task when the Workflow code doesn't produce the same patch marker (in this case `my-change-id`). This ensures you can safely deploy code from `v2` as a "feature flag" alongside the original version (`v1`).

```ts
// v2
export async function myWorkflow(): Promise<void> {
  if (patched('my-change-id')) {
    await activityB();
    await sleep('1 days');
  } else {
    await activityA();
    await sleep('1 days');
    await activityThatMustRunAfterA();
  }
}
```

### Deprecating patches {#deprecated-patches}

After ensuring that all Workflows started with `v1` code have left retention, you can [deprecate the patch](https://typescript.temporal.io/api/namespaces/workflow#deprecatepatch).

Once your Workflows are no longer running the pre-patch code paths, you can deploy your code with `deprecatePatch()`. These Workers will be running the most up-to-date version of the Workflow code, which no longer requires the patch. Deprecated patches serve as a bridge between the final stage of the patching process and the final state that no longer has patches. They function similarly to regular patches by adding a marker to the Workflow History. However, this marker won't cause a replay failure when the Workflow code doesn't produce it.

```ts
// v3
export async function myWorkflow(): Promise<void> {
  deprecatePatch('my-change-id');
  await activityB();
  await sleep('1 days');
}
```

### Removing a patch {#deploy-new-code}

Once your pre-patch Workflows have left retention, you can then safely deploy Workers that run only the `vFinal` code shown earlier, with no `patched()` or `deprecatePatch()` calls.

Patching allows you to make changes to currently running Workflows. It is a powerful method for introducing compatible changes without introducing non-determinism errors.

### Workflow cutovers

To understand why Patching is useful, it's helpful to demonstrate cutting over an entire Workflow. Since incompatible changes only affect open Workflow Executions of the same type, you can avoid determinism errors by creating a whole new Workflow when making changes. To do this, you can copy the Workflow Definition function, giving it a different name, and register both names with your Workers.

For example, you would duplicate `PizzaWorkflow` as `PizzaWorkflowV2`:

```typescript
async function pizzaWorkflow(order: PizzaOrder): Promise<void> {
  // this function contains the original code
}

async function pizzaWorkflowV2(order: PizzaOrder): Promise<void> {
  // this function contains the updated code
}
```

You would then need to update the Worker configuration, and any other identifier strings, to register both Workflow Types:

```typescript
const worker = await Worker.create({
  // both pizzaWorkflow and pizzaWorkflowV2 must be exported from this module
  workflowsPath: require.resolve('./workflows'),
  // other configurations
});
```

The downside of this method is that it requires you to duplicate code and to update any commands used to start the Workflow. This can become impractical over time. This method also does not provide a way to version any still-running Workflows; it is essentially just a cutover, unlike Patching.

### Testing a Workflow for replay safety

To determine whether your Workflow needs a patch, or whether you've patched it successfully, you should incorporate [Replay Testing](/develop/typescript/testing-suite#replay).
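For example, the following minimal sketch (assuming a reachable Temporal Service and a hypothetical Workflow Id) combines the `fetchHistory` and `runReplayHistory` APIs shown earlier to verify that patched code still replays a pre-patch Execution's history:

```ts
import { Client, Connection } from '@temporalio/client';
import { Worker } from '@temporalio/worker';

const connection = await Connection.connect({ address: 'localhost:7233' });
const client = new Client({ connection });

// Fetch the history of an Execution started on the pre-patch code.
const history = await client.workflow.getHandle('your-workflow-id').fetchHistory();

// Replay it against the patched Workflow code. A DeterminismViolationError
// here means the change is not replay-safe and needs a patch (or a fix).
await Worker.runReplayHistory(
  { workflowsPath: require.resolve('./workflows') },
  history,
);
```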
## Worker Versioning

Temporal's [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) feature allows you to tag your Workers and programmatically roll them out in Deployment Versions, so that old Workers can run old code paths and new Workers can run new code paths. This way, you can pin your Workflows to specific revisions, avoiding the need for patching.

---

## Worker Versioning (Legacy) - TypeScript SDK

## How to use Worker Versioning in TypeScript (Deprecated) {#worker-versioning}

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

See the [Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

:::

A Build ID corresponds to a deployment. If you don't already have one, we recommend a hash of the code (such as a Git SHA) combined with a human-readable timestamp. To use Worker Versioning, you need to pass a Build ID to your TypeScript Worker and opt in to Worker Versioning.

### Assign a Build ID to your Worker and opt in to Worker Versioning

You should understand assignment rules before completing this step. See the [Worker Versioning Pre-release README](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md) for more information.

To enable Worker Versioning for your Worker, assign the Build ID (perhaps from an environment variable) and turn it on.

```typescript
// ...
const worker = await Worker.create({
  taskQueue: 'your_task_queue_name',
  buildId: buildId,
  useVersioning: true,
  // ...
});
// ...
```

:::warning

Importantly, when you start this Worker, it won't receive any tasks until you set up assignment rules.

:::

### Specify versions for Activities, Child Workflows, and Continue-as-New Workflows

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

By default, Activities, Child Workflows, and Continue-as-New Workflows are run on the build of the Workflow that created them if they are also configured to run on the same Task Queue. When configured to run on a separate Task Queue, they will default to using the current assignment rules. If you want to override this behavior, you can specify your intent via the `versioningIntent` field available on the options object for each of these commands.

For example, if you want an Activity to use the latest assignment rules rather than inheriting from its parent:

```typescript
// ...
const { echo } = proxyActivities<typeof activities>({
  startToCloseTimeout: '20s',
  versioningIntent: 'USE_ASSIGNMENT_RULES',
});
// ...
```

### Tell the Task Queue about your Worker's Build ID (Deprecated)

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

Now you can use the SDK (or the Temporal CLI) to tell the Task Queue about your Worker's Build ID. You might want to do this as part of your CI deployment process.

```typescript
// ...
await client.taskQueue.updateBuildIdCompatibility('your_task_queue_name', {
  operation: 'addNewIdInNewDefaultSet',
  buildId: 'deadbeef',
});
```

This code adds the `deadbeef` Build ID to the Task Queue as the sole version in a new version set, which becomes the default for the queue.
New Workflows execute on Workers with this Build ID, and existing ones will continue to be processed by appropriately compatible Workers.

If, instead, you want to add the Build ID to an existing compatible set, you can do this:

```typescript
// ...
await client.taskQueue.updateBuildIdCompatibility('your_task_queue_name', {
  operation: 'addNewCompatibleVersion',
  buildId: 'deadbeef',
  existingCompatibleBuildId: 'some-existing-build-id',
});
```

This code adds `deadbeef` to the existing compatible set containing `some-existing-build-id` and marks it as the new default Build ID for that set.

You can promote an existing Build ID in a set to be the default for that set:

```typescript
// ...
await client.taskQueue.updateBuildIdCompatibility('your_task_queue_name', {
  operation: 'promoteBuildIdWithinSet',
  buildId: 'deadbeef',
});
```

You can promote an entire set to become the default set for the queue. New Workflows will start using that set's default build.

```typescript
// ...
await client.taskQueue.updateBuildIdCompatibility('your_task_queue_name', {
  operation: 'promoteSetByBuildId',
  buildId: 'deadbeef',
});
```

You can merge two sets into one, preserving the primary set's default Build ID as the default for the merged set.

```typescript
// ...
await client.taskQueue.updateBuildIdCompatibility('your_task_queue_name', {
  operation: 'mergeSets',
  primaryBuildId: 'deadbeef',
  secondaryBuildId: 'some-existing-build-id',
});
```

---

## Worker performance

This page documents metrics and configurations that drive the efficiency of your Worker fleet. It provides coverage of performance metric families, Worker configuration options, Task Queue information, backlog counts, Task rates, and how to evaluate Worker availability. This content covers practical methods for querying Task Queue information, and strategies for tuning Workers and Task Queue processing so you manage your resources effectively.

:::info

All metrics on this page are prepended with the `temporal_` prefix. For example, `worker_task_slots_available` is actually `temporal_worker_task_slots_available` when used. The omitted prefix makes the names more readable and descriptive.

:::

## Worker performance concepts {#worker-performance-concepts}

A Worker's performance characteristics are affected by, but not limited to, the following elements.

### Task slots {#slots}

A **Worker Task Slot** represents the capacity of a Temporal Worker to execute a single concurrent Task. Slots are crucial for managing the workload and performance of Workers in a Temporal application. They're used for both Workflow and Activity Tasks. When a Worker starts processing a Task, it occupies one slot. The number of available slots directly affects how many Tasks a Worker can handle simultaneously.

### Slot suppliers {#slot-suppliers}

A **Slot Supplier** defines a strategy to provide slots for a Worker, increasing or decreasing the Worker's slot count. The supplier determines when it's acceptable to begin a new Task. Each supplier manages one slot type. There are slot types for Activity, Workflow, Nexus, or Local Activity Tasks. An available slot determines whether or not a Worker is willing to poll for, and execute, a new Task of that type.

Slot supplier strategies include manual assignment of fixed slot counts and resource-balanced "auto-tuner" assignment. Resource-based suppliers adjust slot counts based on CPU and memory resources. Available slot suppliers include:

- **Fixed Size Slot Suppliers**: Hands out slots up to a preset limit.
This is useful if you have a concrete idea of how many resources your Tasks are going to consume, and can easily determine an upper bound on how many should run at once. When you need the absolute best performance, review your hardware and environment characteristics. This information lets you calculate an appropriate fixed-size limit. Evaluate the maximum number of slots you can support without oversubscribing or hitting out-of-memory conditions ("OOMing"). Using that value with a fixed-size supplier provides optimal results with the least overhead.
- **Resource-Based Slot Suppliers**: Hands out slots based on real-time CPU and memory usage. You set target utilization for both CPU and memory, and the Slot Supplier tries to reach those values without exceeding them under load. A resource-based supplier will account for memory limits imposed in containerized environments. It dynamically adjusts the number of available slots for different Task types with respect to current system resources.
- **Custom Slot Suppliers**: Hands out slots based on the custom logic that you define. Use this approach when you need complete control over when Workers accept and execute Tasks. For implementation details, see [Implement Custom Slot Suppliers](#custom-slot-implementation).

:::caution

- You cannot guarantee that the targets for resource-based suppliers won't ever be exceeded. Resources consumed during a Task can't be known ahead of time.
- Worker tuners supersede the existing `maxConcurrentXXXTask` style Worker options. Using both styles will cause an error at Worker initialization time.

:::

### Worker tuning {#worker-tuning}

Worker tuning is the process of defining customized slot suppliers for the different task slots of a Worker to fine-tune its performance. You use special types called **Worker tuners** that assign slot suppliers to various Task Types, including Workflow, Activity, Nexus, and Local Activity Tasks. For more on how to configure and use Worker tuners, refer to [Worker runtime performance tuning](#worker-performance-tuning).

:::caution

Worker tuners supersede the existing `maxConcurrentXXXTask` style Worker options. Using both styles will cause an error at Worker initialization time.

:::

### Task Pollers

A Worker's **Task Pollers** play a crucial role in the Temporal architecture by efficiently feeding work to Workers, supporting scalable, resilient Workflow Execution. Pollers create long-polling connections to the Temporal Service and actively poll a Task Queue for Tasks to process. When a Task Poller receives a Task, it delivers the Task to the appropriate Executor Slot for processing.

Temporal SDKs implement support for *Poller Autoscaling*, which dynamically adjusts the number of pollers in use to maximize throughput for a given number of Workers and the size of the Task backlog. Temporal recommends using Poller Autoscaling for the majority of use cases, as manually setting the number of pollers too high or too low for your workload will result in decreased performance. To configure Poller Autoscaling, see [Configuring Poller Options](#configuring-poller-options) and samples for each Temporal SDK.

### Eager task execution

:::caution

Eager start does not respect Worker Versioning. An eagerly started Workflow may run on any available local Worker even if that Worker is not the Current or Ramping version of its Worker deployment.

:::

As a latency optimization, Activity and Workflow Tasks may be started eagerly in a local Worker under the right circumstances.
#### Eager Activity Start

Eager Activity Start may happen automatically if the Worker processing a Workflow Task has also registered the Activity Definition being called. If so, the Worker may try to reserve an Activity Slot for the execution of the Activity, and the server may respond to the Workflow Task completion with the Activity Task for the Worker to execute immediately.

#### Eager Workflow Start

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Eager Workflow Start is available in [Public Preview](/evaluate/development-production-features/release-stages#public-preview) in the Go, Java, Python, and .NET SDKs. Temporal Cloud and Temporal Server 1.29.0 and higher have Eager Workflow Start available for use by default, but you must explicitly request eager start when starting a Workflow.

:::

Eager Workflow Start reduces the latency required to initiate a Workflow Execution. It is recommended for short-lived Workflows that use Local Activities to interact with external services, especially when these interactions are initiated in the first Workflow Task and the Workflow is deployed near the Temporal Server to minimize network delay. This feature is particularly beneficial for Workflows with a "happy path" that must begin external interactions within tens of milliseconds, while still relying on Temporal's server-driven retries and compensation mechanisms to ensure reliability in failure scenarios.

**Quick Start**

Eager Workflow Start requires that the Starter and the Worker share a Client in the same process, and that `request_eager_start` (or a similarly named option) be set to true in the Start Workflow call. When set, and the Worker has a Workflow Task slot available and the Workflow Definition registered, the Worker can execute the first task of the Workflow locally without first making a round-trip to the Temporal Server. This is typically most useful in combination with a Local Activity executing in the first Workflow Task, since other Workflow API calls that require waiting on something will force a round-trip.

:::tip RESOURCES

- [Go SDK - Code sample](https://github.com/temporalio/samples-go/tree/main/eager-workflow-start)
- [Java SDK - Code sample](https://github.com/temporalio/samples-java/blob/main/core/src/main/java/io/temporal/samples/hello/HelloEagerWorkflowStart.java)
- Python SDK - use `request_eager_start` when calling `start_workflow` or `execute_workflow`
- .NET SDK - use `RequestEagerStart` in your `WorkflowOptions` when starting a workflow
- [Blog: Improving Latency with Eager Workflow Start](https://temporal.io/blog/improving-latency-with-eager-workflow-start)

:::

**How it works**

The traditional way to start a Workflow decouples the starter program from the worker by sharing a Task Queue name between them, similar to a publish/subscribe pattern. This has many advantages: for example, we can reliably schedule a Workflow Execution without a running Worker, or separate the Worker and Workflow implementation from the Starter application and host them independently. But decoupling also makes it harder to optimize for latency. Instead, when the **Starter and Worker are collocated in the same process** and aware of each other, they can interact while bypassing the server, saving a few time-intensive operations.

The following sequence shows Eager Workflow Start in action:

1. The process begins with the Starter setting `request_eager_start` (or a similarly named option) to true in the Start Workflow Options.
2. The SDK will try to locate a local Worker that is willing to execute the first Workflow Task, and reserve an execution slot for it.
3. If successful, the SDK will provide a hint to the server that eager mode is preferred for the new Workflow.
4. The server not only registers the start of the Workflow in history, it also assigns the first Workflow Task to the Starter, all in the same DB update.
5. The first task is included in the server response; no matching step is required.
6. The SDK extracts the task from the response, and dispatches it to the local Worker.

To recover from errors, Eager Workflow Start falls back to the non-eager path. For example, when the first Task is returned eagerly, but the local Worker fails or times out while processing the task, the server retries this task non-eagerly after `WorkflowTaskTimeout`.

## Performance metrics for tuning {#metrics}

The Temporal SDKs emit metrics from Temporal Client usage and Worker Processes. Performance tuning uses three important SDK metric groups:

### Slot availability metrics

Temporal's [`worker_task_slots_available`](/references/sdk-metrics#worker_task_slots_available) and `worker_task_slots_used` gauges report the number of executor slots that are currently unoccupied and in use, respectively, for a Worker type. Tag these with `worker_type=WorkflowWorker` for Workflow Task Workers or `worker_type=ActivityWorker` for Activity Workers.

:::tip

Unlike `worker_task_slots_used`, `worker_task_slots_available` can only be used with fixed-size slot suppliers and can't be used with resource-based slot suppliers.

:::

### Latency metrics

Temporal provides two latency timers: [`workflow_task_schedule_to_start_latency`](/references/sdk-metrics#workflow_task_schedule_to_start_latency) for Workflow Tasks and [`activity_schedule_to_start_latency`](/references/sdk-metrics#activity_schedule_to_start_latency) for Activities. A Schedule-To-Start latency is the time from when a Task is scheduled (that is, placed in a Queue) to when a Worker starts (that is, picks up from the Task Queue) that Task. These metrics help ensure that Tasks are being processed from the queue in a timely manner. For more information about `schedule_to_start` timeout and latency, see [Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout).

### Cache metrics

The [`sticky_cache_size`](/references/sdk-metrics#sticky_cache_size) and [`workflow_active_thread_count`](/references/sdk-metrics#workflow_active_thread_count) metrics report the size of the Workflow cache and the number of cached Workflow threads.

## Worker performance options {#configuration}

Each Worker can be configured by providing custom Worker options (`WorkerOptions`) at instantiation. Options are specific to individual Workers and do not affect other members of your fleet.

### Executor slot options

The `maxConcurrentWorkflowTaskExecutionSize` and `maxConcurrentActivityExecutionSize` options define the number of total available Workflow Task and Activity Task slots for a Worker.

:::caution

Worker tuners supersede the existing `maxConcurrentXXXTask` style Worker options. Using both styles will cause an error at Worker initialization time.

:::

### Configuring Poller Options

#### Recommended Approach

The Temporal SDKs support Poller Autoscaling, which automatically selects an appropriate number of pollers based on need. Using this feature results in more efficient poller usage, better throughput, and schedule-to-start latency improvements.
You can enable this feature by setting the `*_task_poller_behavior` options to `PollerBehaviorAutoscaling`. Names may vary slightly depending on the SDK. For specific examples of enabling Poller Autoscaling, see the SDK Examples section below. Poller Autoscaling will be the default configuration in future versions of Temporal SDKs.

#### Manual Configuration

There are options available to manually configure minimum, maximum, and initial poller counts, but it is not recommended to set these values manually for production use cases. To set these values manually, the following options are available:

- `maxConcurrentWorkflowTaskPollers` (in the JavaSDK: `workflowPollThreadCount`)
- `maxConcurrentActivityTaskPollers` (in the JavaSDK: `activityPollThreadCount`)

These options define the maximum count of pollers performing poll requests on Workflow and Activity Task Queues, respectively.

#### SDK Examples

[Go SDK docs](https://pkg.go.dev/go.temporal.io/sdk/worker#PollerBehaviorAutoscalingOptions)

```go
w := worker.New(c, "my-task-queue", worker.Options{
	WorkflowTaskPollerBehavior: worker.NewPollerBehaviorAutoscaling(worker.PollerBehaviorAutoscalingOptions{}),
	ActivityTaskPollerBehavior: worker.NewPollerBehaviorAutoscaling(worker.PollerBehaviorAutoscalingOptions{}),
	NexusTaskPollerBehavior:    worker.NewPollerBehaviorAutoscaling(worker.PollerBehaviorAutoscalingOptions{}),
})
```

[Java SDK docs](https://javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/worker/tuning/PollerBehaviorAutoscaling.html)

```java
public class WorkerExample {
  public static void main(String[] args) {
    WorkflowServiceStubs service = WorkflowServiceStubs.newLocalServiceStubs();
    WorkflowClient client = WorkflowClient.newInstance(service);
    WorkerFactory factory = WorkerFactory.newInstance(client);
    WorkerOptions workerOptions =
        WorkerOptions.newBuilder()
            .setWorkflowTaskPollersBehavior(new PollerBehaviorAutoscaling())
            .setActivityTaskPollersBehavior(new PollerBehaviorAutoscaling())
            .setNexusTaskPollersBehavior(new PollerBehaviorAutoscaling())
            .build();
    Worker worker = factory.newWorker("my-task-queue", workerOptions);
  }
}
```

[Python SDK docs](https://python.temporal.io/temporalio.worker.PollerBehaviorAutoscaling.html)

```python
worker = Worker(
    client,
    task_queue="my-task-queue",
    workflows=[MyWorkflow],
    activities=[my_activity],
    workflow_task_poller_behavior=PollerBehaviorAutoscaling(),
    activity_task_poller_behavior=PollerBehaviorAutoscaling(),
    nexus_task_poller_behavior=PollerBehaviorAutoscaling(),
)
```

[TypeScript SDK docs](https://typescript.temporal.io/api/interfaces/proto.temporal.api.sdk.v1.WorkerConfig.IAutoscalingPollerBehavior)

```ts
const worker = await Worker.create({
  connection,
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'),
  activities,
  workflowTaskPollerBehavior: PollerBehavior.autoscaling(),
  activityTaskPollerBehavior: PollerBehavior.autoscaling(),
  nexusTaskPollerBehavior: PollerBehavior.autoscaling(),
});
```

[DotNet SDK docs](https://dotnet.temporal.io/api/Temporalio.Worker.Tuning.PollerBehavior.Autoscaling.html)

```csharp
using var worker = new TemporalWorker(
    client,
    new TemporalWorkerOptions("my-task-queue")
    {
        WorkflowTaskPollerBehavior = new PollerBehavior.Autoscaling(),
        ActivityTaskPollerBehavior = new PollerBehavior.Autoscaling(),
        NexusTaskPollerBehavior = new PollerBehavior.Autoscaling(),
    }
    .AddWorkflow<MyWorkflow>()
    .AddActivity(MyActivities.MyActivity));
```

[Ruby SDK docs](https://ruby.temporal.io/Temporalio/Worker/PollerBehavior/Autoscaling.html)

```ruby
worker = Temporalio::Worker.new(
  client,
  'my-task-queue',
  workflows: [MyWorkflow],
  activities: [MyActivity],
  workflow_task_poller_behavior: Temporalio::Worker::PollerBehaviorAutoscaling.new,
  activity_task_poller_behavior: Temporalio::Worker::PollerBehaviorAutoscaling.new,
  nexus_task_poller_behavior: Temporalio::Worker::PollerBehaviorAutoscaling.new,
)
```

### Cache options

A Workflow Cache is created and shared between all Workers on a single host. It's designed to limit the resources used by the cache for each host/process. These options are defined on `WorkerFactoryOptions` in the JavaSDK and in the `worker` package in the GoSDK:

- `worker.setStickyWorkflowCacheSize` (JavaSDK: `WorkerFactoryOptions#workflowCacheSize`) defines the maximum number of cached Workflow Executions. Each cached Workflow contains at least one Workflow thread and its resources (memory, etc.).
- `maxWorkflowThreadCount` defines the maximum number of Workflow threads that may exist concurrently at any time.

These cache options limit the resource consumption of the in-memory Workflow cache. Workflow cache options are shared between all Workers because the Workflow cache is tightly integrated with the resource consumption of the entire host. This includes memory and the total thread count, which should be limited per host/JVM.

### "Large value" drawbacks

There are drawbacks when you use "large values everywhere." As with any multithreading system, specifying excessively large values without monitoring with the SDK and system metrics leads to constant resource contention/stealing. This decreases the total throughput and increases latency jitter of the system.

### Invariants (JavaSDK only) {#invariants}

These properties should always be true for a Worker's configuration. Perform this sanity check after the adjustments to Worker settings.

1. `workflowCacheSize` should be ≤ `maxWorkflowThreadCount`. Each Workflow has at least one Workflow thread.
2. `maxConcurrentWorkflowTaskExecutionSize` should be ≤ `maxWorkflowThreadCount`. Having more Worker slots than the Workflow cache size will lead to resource contention/stealing between executors and unpredictable delays. It's recommended that `maxWorkflowThreadCount` be at least 2x `maxConcurrentWorkflowTaskExecutionSize`.
3. `maxConcurrentWorkflowTaskPollers` should be significantly ≤ `maxConcurrentWorkflowTaskExecutionSize`, and `maxConcurrentActivityTaskPollers` should be significantly ≤ `maxConcurrentActivityExecutionSize`. The number of pollers should always be lower than the number of executors.

## Worker runtime performance tuning {#worker-performance-tuning}

Worker tuning manages the assignment of slot suppliers. A **Worker Tuner** instance exists per-Worker, providing slot suppliers for different slot types (Activity, Workflow, Nexus, or Local Activity Tasks). A tuner assigns different suppliers to each slot type. For example, it might provide a fixed assignment slot supplier for Workflows and use a resource-based supplier for Activities.

### Choosing slot supplier types

Temporal offers three types of slot suppliers: fixed assignment, resource-based, and custom. Here's how to choose the best approach based on your system requirements and workload characteristics.

When choosing whether to opt for fixed assignment or resource-based suppliers, consider:

- Workflow Tasks make minimal demands on the CPU and, normally, do not consume much memory. They are well-served by fixed-size slot suppliers.
- When very low Task completion latency is a concern, avoid resource-based auto-tuning slot suppliers.
- Reserve auto-tuned resource-based slot suppliers for deployments focused on avoiding Worker overload. They provide excellent balance with built-in throttling that ensures the Worker will be cautious when handing out new executor slots.

The following use cases are particularly well suited to resource-based auto-tuning slot suppliers:

- **Fluctuating workloads with low per-Task consumption**: The resource-based supplier works well when each Task consumes few resources but may run for a (relatively) long time. For example: HTTP calls or other blocking I/O that spends most of its time waiting on external events.
- **Protection from out-of-memory & over-subscription in the face of unpredictable per-Task consumption**: Do your Tasks often consume an unpredictable amount of resources? Do you want to avoid crashes without setting an overly conservative fixed limit? In these cases, the resource-based supplier is a good match. Keep in mind that auto-tuning can never do a _perfect_ job and may sometimes exceed your requested system limits for CPU and memory.

For the highest level of control over slot allocation, consider custom slot suppliers. This allows you to tailor the logic of how slots are allocated based on your system requirements. Custom suppliers provide flexibility to optimize for specific use cases that fixed assignment and resource-based suppliers may not fully address.

Choosing the right slot supplier depends on your workload complexity and the control you need over resource allocation. For predictable tasks, variable workloads, or complex dynamic scenarios, Temporal slot suppliers can meet your needs.

### Implement Custom Slot Suppliers {#custom-slot-implementation}

Implement your own Slot Supplier to control how Workers are allocated Tasks and manage the processing of Workflows, Activities, and Nexus Operations. Custom Slot Suppliers let you fine-tune task processing based on your application's needs. Each SDK's reference documentation explains the specifics of the interface, but the core concepts are consistent across SDKs:

| Language   | Slot Supplier Reference |
| ---------- | ----------------------- |
| Go         | [`SlotSupplier`](https://pkg.go.dev/go.temporal.io/sdk/worker#SlotSupplier) |
| Java       | [`SlotSupplier`](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/worker/tuning/SlotSupplier.html) |
| Python     | [`CustomSlotSupplier`](https://python.temporal.io/temporalio.worker.CustomSlotSupplier.html) |
| TypeScript | [`CustomSlotSupplier`](https://typescript.temporal.io/api/interfaces/worker.CustomSlotSupplier) |
| .NET       | [`CustomSlotSupplier`](https://dotnet.temporal.io/api/Temporalio.Worker.Tuning.CustomSlotSupplier.html) |

Slot Suppliers issue `SlotPermit`s. These represent the right to use a slot of a specific type, namely Workflow, Activity, Local Activity, or Nexus. You control whether a Worker can perform certain tasks by issuing or withholding permits.

Custom Slot Suppliers must implement these functions:

- `reserveSlot` - Called before polling for new tasks. Your implementation can block and must return a Slot Permit once it decides to accept new work.
- `tryReserveSlot` - Called for slot reservations in cases like eager activity processing. This must not block.
- `markSlotUsed` - Called when a slot is about to be used for a task (not while it's held during polling). It provides information about the task.
- `releaseSlot` - Called when a slot is no longer needed, whether or not it was used.

Custom policies require more effort, but provide finer control over Task processing. By implementing your own Slot Supplier, you can tailor how Workflows, Activities, and Nexus Operations are handled, optimizing performance for your specific needs.

### Slot supplier throttles

Auto-tuned suppliers may diverge from requested thresholds. The resources a given Task will use can't be known ahead of time. There is a fundamental tradeoff between how quickly a slot supplier is willing to accept Tasks and how well it can respect the defined thresholds.

Slot throttling is a mechanism to control the rate at which new slots for concurrent tasks are made available for processing. This concept is part of the resource-based auto-tuning feature for Workers. By waiting a brief period between making slots available, the Worker can assess how resource usage has changed since the last task began processing. This throttle is called `rampThrottle` in the SDK options for resource-based slot suppliers. It defines the minimum time the Worker will wait between handing out new slots after passing the minimum slots number.

If a just-started Worker were to have no throttle, and there was a backlog of Tasks, it might immediately accept 100 Tasks at once. If each Task allocated 1GB of RAM, the Worker would likely run out of memory and crash. The throttle enforces a wait before handing out new slots (after a minimum number of slots have been occupied) so you can measure newly consumed resources.

## Performance tuning examples {#examples}

The following examples show how to create and provision composite Worker tuners and set other performance-related options. Each tuner provides slot suppliers for various Task types. These examples focus on Activities and Local Activities, since Workflow Tasks normally do not need resource-based tuning.
### Go SDK

```go
// Using the ResourceBasedTuner in worker options
tuner, err := resourcetuner.NewResourceBasedTuner(resourcetuner.ResourceBasedTunerOptions{
	TargetMem: 0.8,
	TargetCpu: 0.9,
})
if err != nil {
	return err
}
workerOptions := worker.Options{Tuner: tuner}

// Combining different types
options := DefaultResourceControllerOptions()
options.MemTargetPercent = 0.8
options.CpuTargetPercent = 0.9
controller := NewResourceController(options)
wfSS, err := worker.NewFixedSizeSlotSupplier(10)
if err != nil {
	return err
}
actSS := &ResourceBasedSlotSupplier{controller: controller, options: defaultActivityResourceBasedSlotSupplierOptions()}
laSS := &ResourceBasedSlotSupplier{controller: controller, options: defaultActivityResourceBasedSlotSupplierOptions()}
nexusSS, err := worker.NewFixedSizeSlotSupplier(10)
if err != nil {
	return err
}
compositeTuner, err := worker.NewCompositeTuner(worker.CompositeTunerOptions{
	WorkflowSlotSupplier:      wfSS,
	ActivitySlotSupplier:      actSS,
	LocalActivitySlotSupplier: laSS,
	NexusSlotSupplier:         nexusSS,
})
if err != nil {
	return err
}
workerOptions = worker.Options{Tuner: compositeTuner}
```

### Java SDK

```java
// Just resource based
WorkerOptions.newBuilder()
    .setWorkerTuner(
        ResourceBasedTuner.newBuilder()
            .setControllerOptions(
                ResourceBasedControllerOptions.newBuilder(0.8, 0.9).build())
            .build())
    .build();

// Combining different types
SlotSupplier<WorkflowSlotInfo> workflowTaskSlotSupplier = new FixedSizeSlotSupplier<>(10);
SlotSupplier<ActivitySlotInfo> activityTaskSlotSupplier =
    ResourceBasedSlotSupplier.createForActivity(
        resourceController, ResourceBasedTuner.DEFAULT_ACTIVITY_SLOT_OPTIONS);
SlotSupplier<LocalActivitySlotInfo> localActivitySlotSupplier =
    ResourceBasedSlotSupplier.createForLocalActivity(
        resourceController, ResourceBasedTuner.DEFAULT_ACTIVITY_SLOT_OPTIONS);
SlotSupplier<NexusSlotInfo> nexusSlotSupplier = new FixedSizeSlotSupplier<>(10);

WorkerOptions.newBuilder()
    .setWorkerTuner(
        new CompositeTuner(
            workflowTaskSlotSupplier,
            activityTaskSlotSupplier,
            localActivitySlotSupplier,
            nexusSlotSupplier))
    .build();
```

### TypeScript SDK

```ts
// Just resource based
const resourceBasedTunerOptions: ResourceBasedTunerOptions = {
  targetMemoryUsage: 0.8,
  targetCpuUsage: 0.9,
};
const workerOptions = {
  tuner: {
    tunerOptions: resourceBasedTunerOptions,
  },
};

// Combining different types
const combinedWorkerOptions = {
  tuner: {
    activityTaskSlotSupplier: {
      type: 'resource-based',
      tunerOptions: resourceBasedTunerOptions,
    },
    workflowTaskSlotSupplier: {
      type: 'fixed-size',
      numSlots: 10,
    },
    localActivityTaskSlotSupplier: {
      type: 'resource-based',
      tunerOptions: resourceBasedTunerOptions,
    },
  },
};
```

### Python SDK

```python
# Just a resource based tuner, with poller autoscaling
tuner = WorkerTuner.create_resource_based(
    target_memory_usage=0.5,
    target_cpu_usage=0.5,
)
worker = Worker(
    client,
    task_queue="foo",
    tuner=tuner,
    workflow_task_poller_behavior=PollerBehaviorAutoscaling(),
    activity_task_poller_behavior=PollerBehaviorAutoscaling(),
)

# Combining different types, with poller autoscaling
resource_based_options = ResourceBasedTunerConfig(0.8, 0.9)
tuner = WorkerTuner.create_composite(
    workflow_supplier=FixedSizeSlotSupplier(10),
    activity_supplier=ResourceBasedSlotSupplier(
        ResourceBasedSlotConfig(),
        resource_based_options,
    ),
    local_activity_supplier=ResourceBasedSlotSupplier(
        ResourceBasedSlotConfig(),
        resource_based_options,
    ),
)
worker = Worker(
    client,
    task_queue="foo",
    tuner=tuner,
    workflow_task_poller_behavior=PollerBehaviorAutoscaling(),
    activity_task_poller_behavior=PollerBehaviorAutoscaling(),
)
```

### .NET C# SDK

```csharp
// Just resource based
var worker = new TemporalWorker(
    Client,
    new TemporalWorkerOptions("my-task-queue")
    {
        Tuner = WorkerTuner.CreateResourceBased(0.8, 0.9),
    });

// Combining different types
var resourceTunerOptions = new ResourceBasedTunerOptions(0.8, 0.9);
var combinedWorker = new TemporalWorker(
    Client,
    new TemporalWorkerOptions("my-task-queue")
    {
        Tuner = new WorkerTuner(
            new FixedSizeSlotSupplier(10),
            new ResourceBasedSlotSupplier(
                new ResourceBasedSlotSupplierOptions(),
                resourceTunerOptions),
            new ResourceBasedSlotSupplier(
                new ResourceBasedSlotSupplierOptions(),
                resourceTunerOptions)),
    });
```

## Workflow Cache Tuning

When the number of cached Workflow Executions reported by `sticky_cache_size` hits `workflowCacheSize`, _or_ the number of threads reported by the `workflow_active_thread_count` metrics gauge hits `maxWorkflowThreadCount`, Workflow Executions will start to be evicted from the cache. An evicted Workflow Execution will need to be replayed when it receives any action that may advance it.

If the Workflow Cache limits described above are hit, and Worker hosts have enough free RAM and are not close to reasonable thread limits, then you may choose to increase `workflowCacheSize` and `maxWorkflowThreadCount` limits to decrease the overall latency and cost of the Replays in the system. If the opposite occurs, consider decreasing the limits.

:::note

In CoreSDK-based SDKs, like TypeScript, this metric works differently and should be monitored and adjusted on a per-Worker and per-Task-Queue basis.

:::

## Available Task Queue information {#task-queue-metrics}

:::tip Support, stability, and dependency info

The information listed in this section is readable using the `DescribeTaskQueueEnhanced` method in the [Go SDK](https://github.com/temporalio/sdk-go/blob/74320648ab0e4178b1fedde01672f9b5b9f6c898/client/client.go), with the [Temporal CLI](https://github.com/temporalio/cli/releases/tag/v1.1.0) `task-queue describe` command, and using `DescribeTaskQueue` through RPC.

:::

The Temporal Service reports information separately for each Task Queue type (not aggregated). Use the following Task Queue properties to retrieve and evaluate information about Task Queue health and performance. Available data include:

- [`ApproximateBacklogCount`](#ApproximateBacklogCountAndAge) and [`ApproximateBacklogAge`](#ApproximateBacklogCountAndAge)
- [`TasksAddRate`](#TasksAddRate-and-TasksDispatchRate) and [`TasksDispatchRate`](#TasksAddRate-and-TasksDispatchRate)
- [`BacklogIncreaseRate`](#BacklogIncreaseRate) (derived from [`TasksAddRate`](#TasksAddRate-and-TasksDispatchRate) and [`TasksDispatchRate`](#TasksAddRate-and-TasksDispatchRate))

### `ApproximateBacklogCount` and `ApproximateBacklogAge` {#ApproximateBacklogCountAndAge}

`ApproximateBacklogCount` represents the approximate count of Tasks currently backlogged in this Task Queue. The number may include expired Tasks as well as active Tasks, but it will eventually converge to the correct count over time.

`ApproximateBacklogAge` returns the approximate age of the oldest Task in the backlog. The age is based on the creation time of the Task at the head of the queue.

You can rely on both these counts when making scaling decisions.

Please note: [Sticky queues](https://docs.temporal.io/sticky-execution) will affect these values, but only for a few seconds.
That's because Tasks sent to Sticky queues are not included in the returned values for `ApproximateBacklogCount` and `ApproximateBacklogAge`. Inaccuracies diminish as the backlog grows.

### `TasksAddRate` and `TasksDispatchRate` {#TasksAddRate-and-TasksDispatchRate}

Reports the approximate Tasks per second added to or dispatched from a Task Queue. This rate is averaged over the most recent 30-second time interval. The calculations include Tasks that were added to or dispatched from the backlog, as well as Tasks that were immediately dispatched and bypassed the backlog (sync-matched).

The actual Task delivery count may be significantly higher than the number reported by these two values:

- Eager dispatch refers to a Temporal feature where Activities can be requested by an SDK using one Workflow Task completion response. Tasks using Eager dispatch do not pass through Task Queues.
- Tasks passed to Sticky Task Queues are not included in the returned values for `TasksAddRate` and `TasksDispatchRate`.

### `BacklogIncreaseRate` {#BacklogIncreaseRate}

Approximates the _net_ Tasks per second added to the backlog, averaged over the most recent 30 seconds. This is calculated as:

```
TasksAddRate - TasksDispatchRate
```

- A positive value `X` indicates the backlog is growing by about `X` Tasks per second.
- A negative value `X` indicates the backlog is shrinking by about `X` Tasks per second.

While individual `add` and `dispatch` rates may be inaccurate due to Eager and Sticky Task Queues, the `BacklogIncreaseRate` reliably reflects the rate at which the backlog is shrinking or growing for backlogs older than a few seconds.

## Evaluate Task Queue performance {#evaluate-worker-loads}

A [Task Queue](https://docs.temporal.io/task-queue) is a lightweight, dynamically allocated queue. [Worker Entities](/workers#worker-entity) poll the queue for [Tasks](https://docs.temporal.io/tasks#task) and retrieve Tasks to work on. Tasks are the contexts that a Worker progresses for a specific Workflow Execution, Activity Execution, or Nexus Task Execution. Each Task Queue type offers its Tasks to compatible Workers for Task completion.

The Temporal Service dynamically creates different [Task Queue types](/task-queue), including Activity Task Queues, Workflow Task Queues, and Nexus Task Queues.

With an accurate estimate of backlog Tasks, you can determine the optimal number of Workers to deploy. Balance your Worker count with the number of Tasks to achieve the best performance. This approach minimizes Task backlog saturation and reduces idle Workers.

Task Queue data provide numerical insights into your Task Queue activity and backlog characteristics. Use these numbers to tune your production deployments. Evaluate your Worker loads and assess whether you need to scale up or reduce your Worker deployment.

:::note RATE LIMITS

[Visibility API rate limits](/cloud/limits#visibility-api-rate-limit) apply to Task Queue performance data requests.

:::

### Query Task Queue info with Temporal CLI {#cli-task-queue-info}

The Temporal CLI helps you monitor and evaluate Worker performance. Issue the following command to display a list of active Workers that have recently polled a Task Queue:

```
temporal task-queue describe \
    --task-queue YourTaskQueueName \
    [additional options]
```

This command retrieves poller information, backlog statistics, and Task reachability for Task types (available in Temporal Server v1.25.0, Temporal CLI 1.1, and later).

:::warning

Task reachability status is experimental.
Determining Task reachability incurs a non-trivial computing cost. This feature may significantly change or be removed in a future release.

:::

### Query Task Queue info with the Go SDK {#go-sdk-task-queue-info}

Retrieve Task Queue data using the Go SDK by calling `DescribeTaskQueueEnhanced`. Specify the Task Queue name and set `ReportStats` to `true`, as in the following example:

```go
for _, taskQueueName := range taskQueueNames {
	resp, err := s.client.DescribeTaskQueueEnhanced(ctx, client.DescribeTaskQueueEnhancedOptions{
		TaskQueue:   taskQueueName,
		ReportStats: true,
	})
	if err != nil {
		log.Printf("Error describing task queue %s: %v", taskQueueName, err)
	}

	// Get the backlog count from the enhanced response
	backlogCount += getBacklogCount(resp)
}
```

### Evaluate Worker availability and capacity issues {#worker-capacity-issues}

Each Temporal [Server](https://docs.temporal.io/temporal-service/temporal-server) records the last time of each poll request. This time is displayed in the `temporal task-queue describe` output.

- A `LastAccessTime` value exceeding one minute may indicate that the Worker fleet is at capacity or that Workers have shut down or been removed.
- Values between one and five minutes typically suggest the Worker fleet is at capacity. "At capacity" means that all Workflow and Activity slots are full.
- Values over 5 minutes since the last poll request usually suggest that Workers have shut down or been removed. Workers are removed if 5 minutes have passed since the last poll request.

### Manage your Worker fleet {#manage-your-worker-fleet}

You can adjust the number of Workers to enhance Workflow Execution performance and manage your fleet size. For instance, a large backlog of Tasks with too few Workers will slow down Workflow Execution completions and decrease processing efficiency. Adding more Workers speeds up completion rates and improves throughput. An empty backlog indicates low Worker utilization, allowing you to reduce your fleet and associated costs.

The values provided by `temporal task-queue describe` can help you manage your Worker fleet deployment:

- `ApproximateBacklogAge` shows how long Tasks have been waiting to be dispatched. If this time grows too long, more Workers can boost Workflow efficiency.
- Calculate the demand per Worker by dividing the number of backlogged Tasks (`ApproximateBacklogCount`) by the number of Workers. Determine if your Task processing rate is within an acceptable range for your needs using the per-Worker demand (how many Tasks each Worker has yet to process), the backlog consumption rate (`TasksDispatchRate`, the rate at which Workers are processing Tasks), and the dispatch latency (`ApproximateBacklogAge`, the time the oldest Task has been waiting to be assigned to a Worker).
- The backlog increase rate (`BacklogIncreaseRate`) shows the changing demand on your Workers over time. As this rate increases, you may need to add more Workers until demand and capacity are balanced. As it decreases, you may be able to reduce your Worker fleet.

## Task Queue processing tuning {#task-queues-processing-tuning}

The following steps limit delays in Task Queue processing due to insufficient or unbalanced Workers. Review these steps if you notice high `schedule_to_start` metrics. The steps are arranged in the recommended order of execution.
### Hosts and Resources provisioning

If currently provisioned Worker hosts are fully utilized (near-full CPU usage, high load average, etc.), additional Worker hosts have to be provisioned to increase the capacity of the Worker pool.

**It's possible to have too many Workers**

Monitor the poll success (`poll_success`/`poll_success_sync`) and poll timeout (`poll_timeouts`) Server metric counters:

Poll Success Rate = (`poll_success` + `poll_success_sync`) / (`poll_success` + `poll_success_sync` + `poll_timeouts`)

Poll Success Rate should be >90% in most cases of systems with a steady load. For high volume and low latency, try to target >95%.

If you see all of the following at the same time, then you might have too many Workers and should consider sizing down:

1. a low Poll Success Rate,
2. low `schedule_to_start_latency`, and
3. low Worker host resource utilization.

### Worker Executor Slots sizing

The main area to focus on when tuning is the number of Worker Executor Slots. Increase the maximum number of executor slots by adjusting `maxConcurrentWorkflowTaskExecutionSize` or `maxConcurrentActivityExecutionSize` if both of the following conditions are met:

1. The Worker hosts are underutilized (no bottlenecks on CPU, load average, etc.).
2. The `worker_task_slots_available` metric from the corresponding Worker type frequently shows a depleted number of available Worker slots.

Alternatively, consider using a resource-based slot supplier as described [here](#slot-suppliers).

### Poller count

Sometimes, it can be appropriate to increase the number of Task Pollers. This is usually more common in situations where your Workers have somewhat high latency when communicating with the server. Poller Autoscaling can handle this for you automatically. Consider manual adjustment if:

1. The Worker hosts are underutilized, for example, there are no bottlenecks on CPU, load average, etc.
2. The `worker_task_slots_available` metric from the corresponding Worker type shows that a significant percentage of Worker slots are available on a regular basis.
3. The `schedule_to_start` metric is abnormally long.

Then consider increasing the number of pollers by adjusting `maxConcurrentWorkflowTaskPollers` or `maxConcurrentActivityTaskPollers`, depending on which type of `schedule_to_start` metric is elevated.

### Rate Limiting

If, after adjusting the poller and executor counts as specified earlier, you still observe an elevated `schedule_to_start`, underutilized Worker hosts, or high `worker_task_slots_available`, you might want to check the following:

- If server-side rate limiting per Task Queue is set by `WorkerOptions#maxTaskQueueActivitiesPerSecond`, remove the limit or adjust the value up. (See [Go](/develop/go/core-application#taskqueueactivitiespersecond) and [Java](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/worker/WorkerOptions.Builder.html).)
- If Worker-side rate limiting per Worker is set by `WorkerOptions#maxWorkerActivitiesPerSecond`, remove the limit. (See [Go](/develop/go/core-application#workeractivitiespersecond), [TypeScript](https://typescript.temporal.io/api/interfaces/worker.WorkerOptions#maxconcurrentactivitytaskexecutions), and [Java](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/worker/WorkerOptions.Builder.html).)

## Related reading

- [Workers in production operation guide](https://temporal.io/blog/workers-in-production)
- [Full set of SDK Metrics reference](/references/sdk-metrics)

---

## What is a Temporal Activity?
This guide provides a comprehensive overview of Temporal Activities, including [Activity Definition](/activity-definition), [Activity Type](/activity-definition#activity-type), [Activity Execution](/activity-execution), and [Local Activity](/local-activity).

An Activity is a normal function or method that executes a single, well-defined action (either short or long running), such as calling another service, transcoding a media file, or sending an email message. Activity code can be non-deterministic. We recommend that it be [idempotent](/activity-definition#idempotency).

Activities are the most common Temporal primitive and encompass small units of work such as:

- Single write operations, like updating user information or submitting a credit card payment
- Batches of similar writes, like creating multiple orders or sending multiple messages
- One or more read operations followed by a write operation, like checking a product status and user address before updating an order status
- A read that should be memoized, like an LLM call, a large download, or a slow-polling read

Larger pieces of functionality should be broken up into multiple Activities. This makes failure recovery easier, keeps timeouts short, and makes it simpler for each Activity to be idempotent.

Workflow code orchestrates the execution of Activities, persisting the results. If an Activity Function Execution fails, any future execution starts from its initial state (except for progress recorded through [Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat)).

Activity Functions are executed by Worker Processes. When the Activity Function returns, the Worker sends the results back to the Temporal Service as part of the [ActivityTaskCompleted](/references/events#activitytaskcompleted) Event. The Event is added to the Workflow Execution's Event History. For other Activity-related Events, see [Activity Events](/workflow-execution/event#activity-events).

---

## Activity Definition

This page discusses the following:

- [Activity Definition](#activity-definition)
- [Idempotency](#idempotency)
- [Constraints](#activity-constraints)
- [Parameters](#activity-parameters)
- [Activity Type](#activity-type)

In day-to-day conversation, the term _Activity_ denotes an [Activity Definition](/activity-definition), [Activity Type](/activity-definition#activity-type), or [Activity Execution](/activity-execution). Temporal documentation aims to be explicit and differentiate between them.

## What is an Activity Definition? {#activity-definition}

An Activity Definition is the code that defines the constraints of an [Activity Task Execution](/tasks#activity-task-execution). Activities encapsulate business logic that is prone to failure, allowing for automatic retries when issues occur.
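For orientation, here is a minimal sketch of an Activity Definition in Go; the function name, its argument, and the email-service call are hypothetical:

```go
package app

import (
    "context"

    "go.temporal.io/sdk/activity"
)

// SendWelcomeEmail is an Activity Definition: an ordinary function that
// performs one well-defined action. Because Activities may be retried,
// the underlying operation should be idempotent (here, keyed by userID).
func SendWelcomeEmail(ctx context.Context, userID string) (string, error) {
    logger := activity.GetLogger(ctx)
    logger.Info("Sending welcome email", "userID", userID)

    // Call out to an email service here (hypothetical).
    return "sent:" + userID, nil
}
```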
- [How to develop an Activity Definition using the Go SDK](/develop/go/core-application#activity-definition)
- [How to develop an Activity Definition using the Java SDK](/develop/java/core-application#develop-activities)
- [How to develop an Activity Definition using the PHP SDK](/develop/php/core-application#develop-activities)
- [How to develop an Activity Definition using the Python SDK](/develop/python/core-application#develop-activities)
- [How to develop an Activity Definition using the TypeScript SDK](/develop/typescript/core-application#develop-activities)
- [How to develop an Activity Definition using the .NET SDK](/develop/dotnet/core-application#develop-activity)

The term 'Activity Definition' refers to the full set of primitives in any given language SDK that provides an access point to an Activity Function Definition: the method or function that is invoked for an [Activity Task Execution](/tasks#activity-task-execution). Therefore, the terms Activity Function and Activity Method refer to the source of an instance of an execution.

Activity Definitions are named and referenced in code by their [Activity Type](/activity-definition#activity-type).

### Idempotency {#idempotency}

Temporal recommends that Activities be idempotent. Idempotence means that performing an operation multiple times has the same result as performing it once. In the context of Temporal, Activities should be designed to be safely executed multiple times without causing unexpected or undesired side effects.

Consider the power button on your laptop. When you press it, the machine changes from one state to the other: from on to off, or from off to on. This is not an idempotent operation, because each invocation leads to a different state. However, imagine that you modified your laptop to have separate On and Off buttons. Pressing the On button multiple times would have no effect beyond the initial invocation, as the laptop is already on. This action is considered idempotent.

Idempotency is an important design consideration in software applications as well. You have probably encountered idempotent operations in your work already. A few examples where idempotent operations are vital:

- **Infrastructure-as-Code (IaC) tool** - Conserving resources is important when you're provisioning infrastructure in the cloud. An IaC system that was not designed with idempotence in mind could lead to high costs if the function to provision a new server was accidentally invoked multiple times. An IaC tool that is designed with idempotence in mind ensures that multiple invocations of the tool don't lead to unintended instances being created.
- **Payment processing system** - A payment processing system must charge the customer only once for a given purchase. If the system was not designed to be idempotent, duplicate requests would result in extra charges and unhappy customers. A payment processing system that is designed to be idempotent ensures customers are not charged multiple times for the same transaction, preventing financial discrepancies.

:::info

By design, completed Activities will not re-execute as part of a [Workflow Replay](/workflow-execution#replay). However, Activities won't be recorded in the [Event History](/encyclopedia/retry-policies#event-history) until they return or produce an error. If an Activity fails to report to the server at all, it will be retried. Designing for idempotence, especially if you have a [Global Namespace](/global-namespace), will improve reusability and reliability.
:::

An Activity is idempotent if multiple [Activity Task Executions](/tasks#activity-task-execution) do not change the state of the system beyond the first Activity Task Execution. A lack of idempotency might affect the correctness of your application but does not affect the Temporal Platform. In other words, lack of idempotency doesn't lead to a platform error.

In some cases, whether something is idempotent doesn't affect the correctness of an application. For example, if you have a monotonically incrementing counter, you might not care that retries increment the counter, because you don't care about the actual value, only that the current value is greater than a previous value.

You should always make your business-logic Activities idempotent in Temporal. Because Activities may be retried, these functions may be executed more than once. A non-idempotent Activity could adversely affect the state of the system.

Activities are an atomic unit of execution within Temporal. They are invoked and either complete successfully or fail as a whole. Take this into consideration when you design your Activities. For example, consider an Activity that has the following three steps:

1. Perform a database lookup
2. Make a call to a microservice with parameters retrieved from the database
3. Write the result of the microservice call to the filesystem

Imagine that the first two steps succeed, but the third step fails due to a permissions issue. During retry, the entire Activity—and therefore each of the three steps—is executed again. To maintain idempotency, design your Activities to be more granular. In this case, you could have three Activities, one for each step. This way, only the step that failed will be executed again. However, you must balance this against the potential for a larger Event History, since there would now be three Activity Executions instead of one.

Idempotence for Activities is also important due to a particular edge case inherent in distributed computing. Consider a scenario in which a Worker polls the Temporal Service, accepts the Activity Task, and begins executing the Activity. The Activity function completes successfully, but the Worker crashes just before it notifies the Temporal Service. In this case, the Event History won't reflect the successful completion of the Task, so the Activity will be retried. If the Activity is not idempotent, this could have negative consequences, such as duplicate charges in a payment processing scenario.

You can achieve idempotency in your application through the use of unique identifiers, known as idempotency keys, which are used to detect duplicate requests. These are enforced by the service you are calling from your Activity, not by the Activity itself. For example, the APIs provided by most payment processors allow the client to include an idempotency key with the request. When the payment service receives a request, it checks a database to determine whether there has already been a request with this key. If so, the duplicate request is ignored and does not result in another charge. If not, the service writes a new record to the database with this key, allowing it to identify duplicate requests in the future. In Temporal, the request to the payment service would be made from within an Activity. You can use a combination of the Workflow Run Id and the Activity Id as an idempotency key, since this is guaranteed to be consistent across retry attempts but unique among Workflow Executions.
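A minimal sketch of this pattern in Go, assuming a hypothetical payment client that deduplicates by idempotency key; the Run Id and Activity Id come from the Activity's context:

```go
package app

import (
    "context"
    "fmt"

    "go.temporal.io/sdk/activity"
)

// PaymentAPI stands in for a real payment processor client that
// deduplicates requests by idempotency key (hypothetical interface).
type PaymentAPI interface {
    Charge(ctx context.Context, amountCents int64, idempotencyKey string) error
}

type Activities struct {
    Payments PaymentAPI
}

// ChargeCustomer derives an idempotency key that is stable across retry
// attempts of this Activity but unique across Workflow Executions.
func (a *Activities) ChargeCustomer(ctx context.Context, amountCents int64) error {
    info := activity.GetInfo(ctx)
    key := fmt.Sprintf("%s:%s", info.WorkflowExecution.RunID, info.ActivityID)
    return a.Payments.Charge(ctx, amountCents, key)
}
```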
For more information about idempotency in Temporal, see the following post:

[Idempotency and Durable Execution](https://temporal.io/blog/idempotency-and-durable-execution)

### Activity retry policy

The Activity retry mechanism gives applications the benefits of durable execution. For example, Temporal will keep track of the [exponential backoff delay](/encyclopedia/retry-policies#backoff-coefficient) even if the Worker crashes. Since Temporal can't tell when a Worker crashes, Workflows rely on the [start_to_close timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) to know how long to wait before assuming that an Activity is inactive.

For an Activity with a [Retry Policy](/encyclopedia/retry-policies) that allows retries, Temporal guarantees that the Activity will be observed as completed exactly once. However, the Activity may be executed multiple times, and may even partially complete more than once during this process. This could lead to a scenario where certain parts of the Activity are executed multiple times before a successful execution is completed.

:::caution

Be cautious about implementing retries inside your Activity code, because internal retries lengthen the required Activity timeout. Internal retries also prevent you from counting failure metrics accurately and make it harder to debug in the Temporal UI when something goes wrong.

:::

### Constraints {#activity-constraints}

Activity Definitions are executed as normal functions. In the event of failure, the function begins at its initial state when retried (except when Activity Heartbeats are established). Therefore, an Activity Definition has no restrictions on the code it contains.

### Parameters {#activity-parameters}

An Activity Definition can support as many parameters as needed. All values passed through these parameters are recorded in the [Event History](/workflow-execution/event#event-history) of the Workflow Execution. Return values are also captured in the Event History for the calling Workflow Execution.

Activity Definitions must contain the following parameters:

- Context: an optional parameter that provides Activity context within multiple APIs.
- Heartbeat: a notification from the Worker to the Temporal Service that the Activity Execution is progressing. Cancellations are allowed only if the Activity Definition permits Heartbeating.
- Timeouts: intervals that control the execution and retrying of Activity Task Executions.

Other parameters, such as [Retry Policies](/encyclopedia/retry-policies) and return values, can be seen in the implementation guides, listed in the next section.

## What is an Activity Type? {#activity-type}

An Activity Type is the mapping of a name to an Activity Definition. Activity Types are scoped through Task Queues.

## Best practices for defining Activities

Here are some best practices you can use when you are creating Activities for your Workflow:

- Activity arguments and return values should be serializable.
- Activities that perform writes should be idempotent.
- Activities have [timeouts](/develop/python/failure-detection#heartbeat-timeout) and [retry policies](/encyclopedia/retry-policies). Your operation should either complete within a few minutes or support the ability to Heartbeat or poll for a result. This way, it will be clear to the Workflow whether the Activity is still making progress.
- You need to specify at least one timeout, typically the [start_to_close timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). Keep in mind that the shorter the timeout, the faster Temporal will retry upon failure. See the [Activity retry policy section](#activity-retry-policy) to learn more.
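Applying these practices with the Go SDK might look like the following hedged sketch; the Workflow name, the `ProcessOrder` Activity, and the timeout values are illustrative:

```go
package app

import (
    "context"
    "time"

    "go.temporal.io/sdk/temporal"
    "go.temporal.io/sdk/workflow"
)

// ProcessOrder is a hypothetical Activity Definition (stub for illustration).
func ProcessOrder(ctx context.Context, orderID string) error { return nil }

// OrderWorkflow shows one way to set Activity timeouts and a Retry Policy.
func OrderWorkflow(ctx workflow.Context, orderID string) error {
    ao := workflow.ActivityOptions{
        // At least one timeout is required; Start-To-Close should be longer
        // than the longest expected single attempt.
        StartToCloseTimeout: 10 * time.Minute,
        // Long-running Activities should Heartbeat so crashed Workers are
        // detected quickly.
        HeartbeatTimeout: 30 * time.Second,
        RetryPolicy: &temporal.RetryPolicy{
            InitialInterval:    time.Second,
            BackoffCoefficient: 2.0,
            MaximumInterval:    time.Minute,
        },
    }
    ctx = workflow.WithActivityOptions(ctx, ao)
    return workflow.ExecuteActivity(ctx, ProcessOrder, orderID).Get(ctx, nil)
}
```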
---

## Activity Execution

This page discusses the following:

- [Activity Execution](#activity-execution)
- [Cancellation](#cancellation)
- [Activity Id](#activity-id)
- [Asynchronous Activity Completion](#asynchronous-activity-completion)
- [Task Token](#task-token)

## What is an Activity Execution? {#activity-execution}

An Activity Execution is the full chain of [Activity Task Executions](/tasks#activity-task-execution).

:::info

- [How to start an Activity Execution using the Go SDK](/develop/go/core-application#activity-execution)
- [How to start an Activity Execution using the Java SDK](/develop/java/core-application#activity-execution)
- [How to start an Activity Execution using the PHP SDK](/develop/php/core-application#activity-execution)
- [How to start an Activity Execution using the Python SDK](/develop/python/core-application#activity-execution)
- [How to start an Activity Execution using the TypeScript SDK](/develop/typescript/core-application#activity-execution)
- [How to start an Activity Execution using the .NET SDK](/develop/dotnet/core-application#activity-execution)

:::

You can customize [Activity Execution timeouts](/encyclopedia/detecting-activity-failures#start-to-close-timeout) and [retry policies](/encyclopedia/retry-policies). If an Activity Execution fails (because it exhausted all retries, threw a [non-retryable error](/encyclopedia/retry-policies#non-retryable-errors), or was canceled), the error is returned to the [Workflow](/workflows), which decides how to handle it.

:::note

Temporal guarantees that an Activity Task either runs or times out. There are multiple failure scenarios in which an Activity Task is lost: it can be lost during delivery to a Worker, or after the Activity Function has been called, if the Worker crashes. Temporal doesn't detect task loss directly; it relies on the [Start-To-Close timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout). If the Activity Task times out, the Activity Execution will be retried according to the Activity Execution's Retry Policy. If the Retry Policy's Maximum Attempts is set to `1` and a Timeout occurs, the Activity Execution will not be retried.

:::

## Cancellation {#cancellation}

Activity Cancellation:

- lets the Activity know it doesn't need to keep doing work, and
- gives the Activity time to clean up any resources it has created.

Activities must Heartbeat to receive Cancellations from a Temporal Service. An Activity may receive Cancellation if:

- The Activity was requested to be Cancelled. This can often cascade from Workflow Cancellation, but not always—SDKs have ways to stop Cancellation from cascading.
- The Activity was considered failed by the Server because any of the Activity timeouts have triggered (for example, the Server didn't receive a Heartbeat within the Activity's Heartbeat Timeout). The [Cancelled Failure](/references/failures#cancelled-failure) that the Activity receives will have `message: 'TIMED_OUT'`.
- The Workflow Run reached a [Closed state](/workflow-execution#workflow-execution-status), in which case the Cancelled Failure will have `message: 'NOT_FOUND'`.
- In some SDKs:
  - The Worker is shutting down.
  - An Activity sends a Heartbeat but the Heartbeat details can't be converted by the Worker's configured [Data Converter](/dataconversion).
    This fails the Activity Task Execution with an Application Failure.
  - The Activity timed out on the Worker side and is not Heartbeating, or the Temporal Service hasn't relayed a Cancellation.

There are different ways to receive Cancellation depending on the SDK.

An Activity may accept or ignore Cancellation:

- To allow Cancellation to happen, let the Cancellation Failure propagate.
- To ignore Cancellation, catch it and continue executing. Some SDKs have ways to shield tasks from being stopped while still letting the Cancellation propagate.

The Workflow can also decide whether it wants to wait for the Activity Cancellation to be accepted or to proceed without waiting.

Cancellation can only be requested a single time. Even if you try to cancel your Activity Execution more than once, it will not receive more than one Cancellation request.

## What is an Activity Id? {#activity-id}

The identifier for an [Activity Execution](#activity-execution). The identifier can be generated by the system, or it can be provided by the Workflow code that spawns the Activity Execution. The identifier is unique among the open Activity Executions of a [Workflow Run](/workflow-execution/workflowid-runid#run-id). (A single Workflow Run may reuse an Activity Id if an earlier Activity Execution with the same Id has closed.)

An Activity Id can be used to [complete the Activity asynchronously](#asynchronous-activity-completion).

## What is Asynchronous Activity Completion? {#asynchronous-activity-completion}

Asynchronous Activity Completion is a feature that enables an Activity Function to return without causing the Activity Execution to complete. The Temporal Client can then be used from anywhere to both Heartbeat Activity Execution progress and eventually complete the Activity Execution and provide a result.

How to complete an Activity asynchronously in:

- [Go](/develop/go/asynchronous-activity-completion)
- [Java](/develop/java/asynchronous-activity-completion)
- [PHP](/develop/php/asynchronous-activity-completion)
- [Python](/develop/python/asynchronous-activity-completion)
- [TypeScript](/develop/typescript/asynchronous-activity-completion)
- [.NET](/develop/dotnet/asynchronous-activity)

### When to use Async Completion

When an external system has the final result of a computation that is started by an Activity, there are three main ways of getting the result to the Workflow:

1. The external system uses Async Completion to complete the Activity with the result.
2. The Activity completes normally, without the result. Later, the external system sends a Signal to the Workflow with the result.
3. A subsequent Activity [polls the external system](https://community.temporal.io/t/what-is-the-best-practice-for-a-polling-activity/328/2) for the result.

If you don't have control over the external system — that is, you can't add Async Completion or a Signal to its code — then:

- you can poll (#3), or
- if the external system can reliably call a webhook (and retry calling in the case of failure), you can write a webhook handler that sends a Signal to the Workflow (#2).

The decision between using #1 and #2 involves a few factors. Use Async Completion if:

- the external system is unreliable and might fail to Signal, or
- you want the external process to Heartbeat or receive Cancellation.

Otherwise, if the external system can reliably be trusted to do the task and Signal back with the result, and it doesn't need to Heartbeat or receive Cancellation, then you may want to use Signals.
The benefit of using Signals has to do with the timing of failure retries. For example, consider an external process that is waiting for a human to review something and respond, which could take up to a week.

If you use Async Completion (#1), you would:

- set a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) of one week on the Activity,
- in the Activity, notify the external process that you need the human review, and
- have the external process Asynchronously Complete the Activity when the human responds.

If the Activity fails on the second step to notify the external system and doesn't throw an error (for example, if the Worker dies), then the Activity won't be retried for a week, when the Start-To-Close Timeout is hit.

If you use Signals, you would:

- set a [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) of one minute on the Activity,
- in the Activity, notify the external process that you need the human review,
- complete the Activity without the result, and
- have the external process Signal the Workflow when the human responds.

If the Activity fails on the second step to notify the external system and doesn't throw an error, then the Activity will be retried in a minute. In the second scenario, the failure is retried sooner. This is particularly helpful in scenarios like this one, in which the external process might take a long time.

### What is a Task Token? {#task-token}

A Task Token is a unique identifier for an [Activity Task Execution](/tasks#activity-task-execution). [Asynchronous Activity Completion](#asynchronous-activity-completion) calls take either of the following as arguments:

- a Task Token, or
- an [Activity Id](#activity-id), a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id), and optionally a [Run Id](/workflow-execution/workflowid-runid#run-id).

Because a Task Token is unique to a single Activity Task Execution, retries can leave a remote service holding an invalid one. For example, an Activity might fail after passing its current Task Token to a remote service but before returning the error that marks it for asynchronous completion; once the Activity is retried, the service is left with a Task Token that's no longer valid. To avoid this risk, you can provide the Activity Id and Workflow Id to the remote service instead of the Task Token.

---

## Local Activity

This page discusses [Local Activity](#local-activity).

## What is a Local Activity? {#local-activity}

A Local Activity is an [Activity Execution](/activity-execution) that executes in the same process as the [Workflow Execution](/workflow-execution) that spawns it.

Some Activity Executions are very short-lived and do not need the queuing semantics, flow control, rate limiting, and routing capabilities of normal Activities. For this case, Temporal supports the Local Activity feature.

The main benefit of Local Activities is that they use fewer Temporal Service resources (for example, fewer History events) and have much lower latency overhead (because no round trip to the Temporal Service is needed) compared to normal Activity Executions. However, Local Activities are subject to shorter durations and a lack of rate limiting.

Consider using Local Activities for functions that meet all of the following criteria (a usage sketch follows the list):

- can be implemented in the same binary as the Workflow that calls them.
- do not require global rate limiting.
- do not require routing to a specific Worker or Worker pool.
- are no longer than a few seconds, inclusive of retries.
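A hedged sketch of invoking such a function as a Local Activity with the Go SDK; the Workflow name, the `ValidateInput` function, and the timeout value are illustrative:

```go
package app

import (
    "context"
    "time"

    "go.temporal.io/sdk/workflow"
)

// ValidateInput is a short, same-binary function suitable for a Local
// Activity (hypothetical stub for illustration).
func ValidateInput(ctx context.Context, input string) (bool, error) {
    return input != "", nil
}

// ValidationWorkflow invokes ValidateInput as a Local Activity.
func ValidationWorkflow(ctx workflow.Context, input string) (bool, error) {
    lao := workflow.LocalActivityOptions{
        // Keep Local Activities short, inclusive of retries.
        StartToCloseTimeout: 5 * time.Second,
    }
    ctx = workflow.WithLocalActivityOptions(ctx, lao)

    var ok bool
    err := workflow.ExecuteLocalActivity(ctx, ValidateInput, input).Get(ctx, &ok)
    return ok, err
}
```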
If a Local Activity takes longer than 80% of the Workflow Task Timeout (which is 10 seconds by default), the Worker will ask the Temporal Service to create a new Workflow Task to extend the "lease" for processing the Local Activity. The Worker will continue doing so until the Local Activity has completed. This is called Workflow Task Heartbeating.

The drawbacks of long-running Local Activities are:

- Each new Workflow Task results in 3 more Events in History.
- The Workflow won't get notified of new events like Signals and completions until the next Workflow Task Heartbeat.
- New Commands created by the Workflow concurrently with the Local Activity will not be sent to the Temporal Service until either the Local Activity completes or the next Workflow Task Heartbeat.

Using a Local Activity without understanding its limitations can cause various production issues. **We recommend using regular Activities unless your use case requires very high throughput and large fan-outs of very short-lived Activities.** More guidance on choosing between [Local Activity vs Activity](https://community.temporal.io/t/local-activity-vs-activity/290/3) is available in our forums.

---

## Child Workflows

A Child Workflow Execution is a [Workflow Execution](/workflow-execution) that is spawned from within another Workflow in the same Namespace.

- [Go SDK Child Workflow feature guide](/develop/go/child-workflows)
- [Java SDK Child Workflow feature guide](/develop/java/child-workflows)
- [PHP SDK Child Workflow feature guide](/develop/php/child-workflows)
- [Python SDK Child Workflow feature guide](/develop/python/child-workflows)
- [TypeScript SDK Child Workflow feature guide](/develop/typescript/child-workflows)
- [.NET SDK Child Workflow feature guide](/develop/dotnet/child-workflows)

A Workflow Execution can be both a Parent and a Child Workflow Execution, because any Workflow can spawn another Workflow.

A Parent Workflow Execution must await the spawning of the Child Workflow Execution. The Parent can optionally await the result of the Child Workflow Execution. Consider the Child's [Parent Close Policy](/parent-close-policy) if the Parent does not await the result of the Child, which includes any use of Continue-As-New by the Parent.

:::note

Child Workflows do not carry over when the Parent uses [Continue-As-New](/workflow-execution/continue-as-new). This means that if a Parent Workflow Execution uses Continue-As-New, any ongoing Child Workflow Executions will not be retained in the new continued instance of the Parent.

:::

When a Parent Workflow Execution reaches a Closed status, the Temporal Service propagates Cancellation Requests or Terminations to Child Workflow Executions depending on the Child's [Parent Close Policy](/parent-close-policy).

If a Child Workflow Execution uses Continue-As-New, from the Parent Workflow Execution's perspective the entire chain of Runs is treated as a single execution.

## When to use Child Workflows

There is no reason to use Child Workflows just for code organization. You can use object-oriented structures and other code-organization techniques to deal with complexity. It is typically recommended to start from a single Workflow Definition if your problem has bounded size in terms of the number of Activity Executions and processed Signals; it is simpler than multiple asynchronously communicating Workflows. However, there are several valid reasons for using Child Workflows, discussed below.
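Before those reasons, here is a minimal, hedged sketch of spawning a Child Workflow with the Go SDK; the Workflow names and the Id scheme are illustrative (and echo the host-upgrade example below):

```go
package app

import (
    "go.temporal.io/sdk/workflow"
)

// ProvisionHost is a hypothetical Child Workflow Definition (stub).
func ProvisionHost(ctx workflow.Context, hostname string) error { return nil }

// ParentWorkflow spawns a Child per resource, using the resource's name in
// the Workflow Id to guarantee uniqueness.
func ParentWorkflow(ctx workflow.Context, hostname string) error {
    cwo := workflow.ChildWorkflowOptions{
        WorkflowID: "host-upgrade-" + hostname,
    }
    ctx = workflow.WithChildOptions(ctx, cwo)

    // The Parent must at least await the Child's spawn; here it also
    // awaits the Child's result.
    return workflow.ExecuteChildWorkflow(ctx, ProvisionHost, hostname).Get(ctx, nil)
}
```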
### Create a separate service

Because a Child Workflow Execution can be processed by a completely separate set of [Workers](/workers#worker) than the Parent Workflow Execution, it can act as an entirely separate service. However, this also means that a Parent Workflow Execution and a Child Workflow Execution do not share any local state. As with all Workflow Executions, they can communicate only via asynchronous [Signals](/sending-messages#sending-signals).

### Partition problems into smaller chunks

An individual Workflow Execution has an [Event History](/workflow-execution/event#event-history) size limit, which imposes a couple of considerations for using Child Workflows.

On one hand, because Child Workflow Executions have their own Event Histories, they are often used to partition large workloads into smaller chunks. For example, a single Workflow Execution does not have enough space in its Event History to spawn 100,000 [Activity Executions](/activity-execution), but a Parent Workflow Execution can spawn 1,000 Child Workflow Executions that each spawn 1,000 Activity Executions, achieving a total of 1,000,000 Activity Executions.

However, because a Parent Workflow Execution's Event History contains [Events](/workflow-execution/event#event) that correspond to the status of each Child Workflow Execution, a single Parent should not spawn more than 1,000 Child Workflow Executions.

In general, Child Workflow Executions result in more overall Events recorded in Event Histories than Activities. Because each entry in an Event History is a _cost_ in terms of compute resources, this could become a factor in very large workloads. Therefore, we recommend starting with a single Workflow implementation that uses Activities until there is a clear need for Child Workflows.

### Represent a single resource

Like any Workflow Execution, a Child Workflow Execution can map one-to-one with a resource. It can be used to manage the resource, using its Id to guarantee uniqueness. For example, a Workflow that manages host upgrades could spawn a Child Workflow Execution per host (with the hostname as the Workflow Id) and use them to ensure that all operations on a given host are serialized.

### Periodic logic execution

A Child Workflow can be used to execute periodic logic without overwhelming the Parent Workflow's Event History. In this scenario, the Parent Workflow starts a Child Workflow that executes periodic logic, calling [Continue-As-New](/workflow-execution/continue-as-new) as many times as needed, then completes. From the Parent's point of view, it is just a single Child Workflow invocation.

### Child Workflow versus an Activity

Child Workflow Executions and Activity Executions are both started from Workflows, so you might wonder when to use which. Here are some important differences:

- A Child Workflow has access to all Workflow APIs but is subject to the same [deterministic constraints](/workflow-definition#deterministic-constraints) as other Workflows. An Activity has the inverse pros and cons—no access to Workflow APIs but no Workflow constraints.
- A Child Workflow Execution can continue on if its Parent is canceled with a [Parent Close Policy](/parent-close-policy) of `ABANDON`. An Activity Execution is _always_ canceled when its Workflow Execution is canceled. (It can react to the Cancellation to perform cleanup.) The decision is roughly analogous to spawning a child process in a terminal to do work versus doing work in the same process.
- Temporal tracks all state changes within a Child Workflow Execution in its Event History. Only the input, output, and retry attempts of an Activity Execution are tracked.

A Workflow models composite operations that consist of multiple Activities or other Child Workflows. An Activity usually models a single operation on the external world.

Our advice: **When in doubt, use an Activity.**

---

## Parent Close Policy

This page discusses [Parent Close Policy](#parent-close-policy).

## What is a Parent Close Policy? {#parent-close-policy}

A Parent Close Policy determines what happens to a Child Workflow Execution if its Parent changes to a Closed status (Completed, Failed, or Timed Out).

- [How to set a Parent Close Policy using the Go SDK](/develop/go/child-workflows#parent-close-policy)
- [How to set a Parent Close Policy using the Java SDK](/develop/java/child-workflows#parent-close-policy)
- [How to set a Parent Close Policy using the PHP SDK](/develop/php/child-workflows#parent-close-policy)
- [How to set a Parent Close Policy using the Python SDK](/develop/python/child-workflows#parent-close-policy)
- [How to set a Parent Close Policy using the TypeScript SDK](/develop/typescript/child-workflows#parent-close-policy)
- [How to set a Parent Close Policy using the .NET SDK](/develop/dotnet/child-workflows#parent-close-policy)

There are three possible values:

- **Abandon:** the Child Workflow Execution is not affected.
- **Request Cancel:** a Cancellation request is sent to the Child Workflow Execution.
- **Terminate** (default): the Child Workflow Execution is forcefully Terminated.

See the [`ParentClosePolicy`](https://github.com/temporalio/api/blob/c1f04d0856a3ba2995e92717607f83536b5a44f5/temporal/api/enums/v1/workflow.proto#L44) proto definition.

Each Child Workflow Execution may have its own Parent Close Policy. This policy applies only to Child Workflow Executions and has no effect otherwise. Because the policy can be set per child, you can opt out of propagating Terminations and Cancellation requests on a per-child basis. This is useful for starting Child Workflows asynchronously (see the [relevant issue here](https://community.temporal.io/t/best-way-to-create-an-async-child-workflow/114) or the corresponding SDK docs).

---

## Codec Server

This page discusses [Codec Server](#codec-server).

## What is a Codec Server? {#codec-server}

A Codec Server is an HTTP/HTTPS server that uses a [custom Payload Codec](/production-deployment/data-encryption) to decode your data remotely through endpoints.

A Codec Server follows the Temporal [Codec Server Protocol](https://github.com/temporalio/samples-go/tree/main/codec-server#codec-server-protocol). It implements two endpoints:

- `/encode`
- `/decode`

Each endpoint receives and responds with a JSON body that has a `payloads` property containing an array of [Payloads](/dataconversion#payload). The endpoints run the Payloads through a [Payload Codec](/payload-codec) before returning them.
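As a sketch, the Go SDK's `converter` package provides an HTTP handler that serves this protocol for whatever codecs you pass it; the no-op codec and the address below are illustrative placeholders for your own Payload Codec and deployment:

```go
package main

import (
    "log"
    "net/http"

    commonpb "go.temporal.io/api/common/v1"
    "go.temporal.io/sdk/converter"
)

// noopCodec is a placeholder Payload Codec; substitute your own
// encryption or compression codec here.
type noopCodec struct{}

func (noopCodec) Encode(ps []*commonpb.Payload) ([]*commonpb.Payload, error) { return ps, nil }
func (noopCodec) Decode(ps []*commonpb.Payload) ([]*commonpb.Payload, error) { return ps, nil }

func main() {
    // NewPayloadCodecHTTPHandler serves /encode and /decode per the
    // Codec Server Protocol. A production deployment would also need
    // TLS, CORS (for the Web UI), and authorization.
    handler := converter.NewPayloadCodecHTTPHandler(noopCodec{})
    log.Fatal(http.ListenAndServe("localhost:8081", handler))
}
```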
Most SDKs provide example Codec Server implementation samples, listed here: - [Go](https://github.com/temporalio/samples-go/tree/main/codec-server) - [Java](https://github.com/temporalio/sdk-java/tree/master/temporal-remote-data-encoder) - [.NET](https://github.com/temporalio/samples-dotnet/tree/main/src/Encryption) - [Python](https://github.com/temporalio/samples-python/blob/main/encryption/codec_server.py) - [TypeScript](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/codec-server.ts) #### Usage When you apply custom encoding with encryption or compression on your Workflow data, it is stored in the encrypted/compressed format on the Temporal Server. For details on what data is encoded, see [Securing your data](/production-deployment/data-encryption). To see decoded data when using the Temporal CLI or Web UI to perform some operations on a Workflow Execution, configure the Codec Server endpoint in the Web UI and the Temporal CLI. When you configure the Codec Server endpoints, the Temporal CLI and Web UI send the encoded data to the Codec Server, and display the decoded data received from the Codec Server. For details on creating your Codec Server, see [Codec Server Setup](/production-deployment/data-encryption#codec-server-setup). --- ## How does Temporal handle application data? This guide provides an overview of data handling using a Data Converter on the Temporal Platform. Data Converters in Temporal are SDK components that handle the serialization and encoding of data entering and exiting a Temporal Service. Workflow inputs and outputs need to be serialized and deserialized so they can be sent as JSON to a Temporal Service. The Data Converter encodes data from your application to a [Payload](/dataconversion#payload) before it is sent to the Temporal Service in the Client call. When the Temporal Server sends the encoded data back to the Worker, the Data Converter decodes it for processing within your application. This ensures that all your sensitive data exists in its original format only on hosts that you control. Data Converter steps are followed when data is sent to a Temporal Service (as input to a Workflow) and when it is returned from a Workflow (as output). Due to how Temporal provides access to Workflow output, this implementation is asymmetric: - Data encoding is performed automatically using the default converter provided by Temporal or your custom Data Converter when passing input to a Temporal Service. For example, plain text input is usually serialized into a JSON object. - Data decoding may be performed by your application logic during your Workflows or Activities as necessary, but decoded Workflow results are never persisted back to the Temporal Service. Instead, they are stored encoded on the Temporal Service, and you need to provide an additional parameter when using [`temporal workflow show`](/cli/workflow#show) or when browsing the Web UI to view output. Each piece of data (like a single argument or return value) is encoded as a [Payload](/dataconversion#payload), which consists of binary data and key-value metadata. For details, see the API references: - [Go](https://pkg.go.dev/go.temporal.io/sdk/converter#DataConverter) - [Java](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/common/converter/DataConverter.html) - [Python](https://python.temporal.io/temporalio.converter.DataConverter.html) - [TypeScript](https://typescript.temporal.io/api/interfaces/common.DataConverter) ### What is a Payload? 
{#payload}

A [Payload](https://api-docs.temporal.io/#temporal.api.common.v1.Payload) represents binary data such as input and output from Activities and Workflows. Payloads also contain metadata that describes their data type or other parameters for use by custom encoders/converters.

When processed through the SDK, the [default Data Converter](/default-custom-data-converters#default-data-converter) serializes your values to Payloads before sending them to the Temporal Server. The default Data Converter converts supported types of values to Payloads. You can create a custom [Payload Converter](/payload-converter) to apply different conversion steps. You can additionally apply [custom codecs](/payload-codec), such as for encryption or compression, on your Payloads.

---

## Default and Custom Data Converters

This page discusses the following:

- [Default Data Converter](#default-data-converter)
- [Custom Data Converter](#custom-data-converter)

## What is a default Data Converter? {#default-data-converter}

Each Temporal SDK includes and uses a default Data Converter. The default Data Converter converts objects to bytes using a series of Payload Converters and supports binary, Protobuf, and JSON formats. It encodes values in the following order:

- Null
- Byte array
- Protobuf JSON
- JSON

In SDKs that cannot determine parameter types at runtime (for example, TypeScript), Protobufs aren't included in the default converter.

For example:

- If a value is an instance of a Protobuf message, it is encoded with [proto3 JSON](https://developers.google.com/protocol-buffers/docs/proto3#json).
- If a value isn't null, binary, or a Protobuf, it is encoded as JSON. Most common input types — including strings, integers, floating point numbers, and booleans — are serializable as JSON. If any part of the value is not serializable as JSON, an error is thrown.

The default Data Converter serializes objects based on their root type, rather than nested types. The JSON serializers of some SDKs cannot process lists with Protobuf child objects without a [custom Data Converter](#custom-data-converter).

## What is a custom Data Converter? {#custom-data-converter}

A custom Data Converter extends the default Data Converter with custom logic for [Payload](/dataconversion#payload) conversion or encoding.

You can create a custom Data Converter to alter formats (for example, using [MessagePack](https://msgpack.org/) instead of JSON) or add compression and encryption.

A Payload Codec encodes and decodes [Payloads](/dataconversion#payload), with bytes-to-bytes conversion. To use custom encryption or compression logic, create a custom Payload Codec with your encryption/compression logic in the `encode` function and your decryption/decompression logic in the `decode` function. To implement a custom Payload Codec, you can override the default Data Converter, or create a customized Data Converter that defines its own Payload Converter.

Custom Data Converters are not applied to all data; for example, [Search Attributes](/search-attribute) are persisted unencoded so they can be indexed for searching.

A customized Data Converter can have the following three components:

- [Payload Converter](/payload-converter)
- [Payload Codec](/payload-codec)
- [Failure Converter](/failure-converter)

For details on how to implement custom encryption and compression in your SDK, see [Data Encryption](/production-deployment/data-encryption).
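A hedged Go sketch of a bytes-to-bytes Payload Codec that gzip-compresses Payload data, tagging encoded Payloads via a hypothetical metadata key so that `Decode` passes through anything it didn't encode; the wiring comment at the end shows one way to attach it to a Client:

```go
package app

import (
    "bytes"
    "compress/gzip"
    "io"

    commonpb "go.temporal.io/api/common/v1"
)

// codecKey is a hypothetical metadata marker identifying Payloads this
// codec has encoded.
const codecKey = "my-gzip-codec"

type GzipCodec struct{}

// Encode compresses each Payload's data and tags it via metadata.
func (GzipCodec) Encode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
    out := make([]*commonpb.Payload, len(payloads))
    for i, p := range payloads {
        var buf bytes.Buffer
        w := gzip.NewWriter(&buf)
        if _, err := w.Write(p.Data); err != nil {
            return nil, err
        }
        if err := w.Close(); err != nil {
            return nil, err
        }
        md := map[string][]byte{codecKey: []byte("1")}
        for k, v := range p.Metadata {
            md[k] = v
        }
        out[i] = &commonpb.Payload{Metadata: md, Data: buf.Bytes()}
    }
    return out, nil
}

// Decode decompresses Payloads this codec encoded; others pass through.
func (GzipCodec) Decode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
    out := make([]*commonpb.Payload, len(payloads))
    for i, p := range payloads {
        if _, ok := p.Metadata[codecKey]; !ok {
            out[i] = p // not ours; pass through unchanged
            continue
        }
        r, err := gzip.NewReader(bytes.NewReader(p.Data))
        if err != nil {
            return nil, err
        }
        data, err := io.ReadAll(r)
        if err != nil {
            return nil, err
        }
        md := make(map[string][]byte, len(p.Metadata))
        for k, v := range p.Metadata {
            if k != codecKey {
                md[k] = v
            }
        }
        out[i] = &commonpb.Payload{Metadata: md, Data: data}
    }
    return out, nil
}

// Wire it in when creating the Client (sketch):
//
//  client.Options{DataConverter: converter.NewCodecDataConverter(
//      converter.GetDefaultDataConverter(), GzipCodec{})}
```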
---

## Failure Converter

This page discusses [Failure Converter](#failure-converter).

## What is a Failure Converter? {#failure-converter}

As with input and output, Temporal uses its own default converter logic for errors that are generated by Workflows. The default Failure Converter copies error messages and call stacks as plain text, and this text output is then directly accessible in the `message` field of the resulting Failures. This may be undesirable for your application. In some cases, errors could contain privileged or sensitive information that you need to prevent from leaking or being available via a side channel.

Failure messages and call stacks are not encoded as codec-capable Payloads by default; you must explicitly enable encoding these common attributes on failures. If your errors might contain sensitive information, you can encrypt the message and call stack by configuring the default Failure Converter to use your encoding. This moves your `message` and `stack_trace` fields to a Payload that's run through your codec.

For example, with the Temporal Go SDK, you can do this by adding a `FailureConverter` parameter to the `client.Options{}` struct you pass to `client.Dial()`. The `FailureConverter` should override the `DefaultFailureConverterOptions{}` by setting `EncodeCommonAttributes: true`, like so:

```go
c, err := client.Dial(client.Options{
    // Set DataConverter here to ensure that workflow inputs and results are
    // encoded as required.
    DataConverter: mycustom.DataConverter,
    // Encode failure messages and stack traces with the same codec.
    FailureConverter: temporal.NewDefaultFailureConverter(temporal.DefaultFailureConverterOptions{
        EncodeCommonAttributes: true,
    }),
})
```

If for some reason you need to specify a different set of converter logic for your Failures, you can replace `NewDefaultFailureConverter` with a custom implementation. For example, if you are both working with highly sensitive data and using a sophisticated logging/observability implementation, you may need to implement different encryption methods for each of them.

---

## Key Management

This page discusses [Key Management](#key-management).

## What is Key Management? {#key-management}

Key Management is a fundamental part of working with encryption keys. There are many computational and logistical aspects to generating and rotating keys, and this usually calls for a dedicated application in your stack.

Here are some general recommendations for working with encryption keys for Temporal applications:

- [Symmetric encryption](https://en.wikipedia.org/wiki/Symmetric-key_algorithm) is generally faster and produces smaller payloads than asymmetric encryption. Normally, an advantage of _asymmetric_ encryption is that it allows you to distribute your encryption and decryption keys separately, but depending on your infrastructure, this might not offer any security benefits with Temporal.
- AES-based algorithms are [hardware accelerated in Go](https://pkg.go.dev/crypto/aes) and other languages. AES algorithms are widely vetted and trusted, and there are many different variants that may suit your requirements. Load tests using `ALG_AES_256_GCM_HKDF_SHA512_COMMIT_KEY` have performed well.
- Store your encryption keys in the same manner as you store passwords, config details, and other sensitive data. When possible, load the key into your application so you don't need to make a network call to retrieve it. Separate keys for each environment or Namespace as much as possible.
- Make sure you have a key rotation strategy in place in case your keys are compromised or need to be replaced for another reason. Consider using a dedicated secrets engine or a key management system (KMS). Note that when you rotate keys, you may also need to retain old keys to query old Workflows.

### Key Rotation

National Institute of Standards and Technology (NIST) guidance recommends periodic rotation of encryption keys. For AES-GCM keys, rotation should occur before approximately 2^32 encryptions have been performed by a key version, following the guidelines of NIST publication 800-38D.

It is recommended that operators estimate the encryption rate of a key and use that to determine a frequency of rotation that prevents the guidance limits from being reached. For example, at an estimated rate of 40 million operations per day, rotating a key every three months is sufficient: roughly 90 days × 40 million ≈ 3.6 billion operations, safely below the 2^32 (≈ 4.29 billion) limit.

Key rotation should generally be transparent to the Temporal Data Converter implementation. Temporal's `Encode()` and `Decode()` steps only need to trigger as expected, and Temporal has no knowledge of how or when you generate your encryption keys. You should design your Encode and Decode steps to accept all the necessary parameters for your key management, such as the key version, alongside your payloads. Like Data Converters, keys should be mapped to a Namespace in Temporal.

### Using Vault for Key Management

[This repository](https://github.com/zboralski/codecserver) provides a robust and complete example of using Temporal with HashiCorp's [Vault](https://www.vaultproject.io/) secrets engine.

---

## Payload Codec

This page discusses [Payload Codec](#payload-codec).

## What is a Payload Codec? {#payload-codec}

A Payload Codec transforms an array of [Payloads](/dataconversion#payload) (for example, a list of Workflow arguments) into another array of Payloads. When serializing to Payloads, the Payload Converter is applied first to convert your objects to bytes, followed by codecs that convert bytes to bytes. When deserializing from Payloads, the codecs are applied first, in reverse order, to undo the encoding, followed by the Payload Converter.

Use a custom Payload Codec to transform your Payloads; for example, to implement compression and/or encryption on your Workflow Execution data.

### Encryption {#encryption}

Using end-to-end encryption in your custom Data Converter ensures that sensitive application data is secure when handled by the Temporal Server.

Apply your encryption logic in a custom Payload Codec and use it locally to encrypt data. You maintain all the encryption keys, and the Temporal Server sees only encrypted data. Refer to [What is Key Management?](/key-management) for more guidance.

Your data exists unencrypted only on the Client and the Worker process that is executing the Workflows and Activities, on hosts that you control. For details, see [Securing your data](/production-deployment/data-encryption).

The following samples use encryption (AES-GCM with a 256-bit key) in a custom Data Converter:

- [Go sample](https://github.com/temporalio/samples-go/tree/main/encryption)
- [Java sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/encryptedpayloads)
- [Python sample](https://github.com/temporalio/samples-python/tree/main/encryption)
- [TypeScript sample](https://github.com/temporalio/samples-typescript/tree/main/encryption)

---

## Payload Converter

This page discusses [Payload Converter](#payload-converter).
## What is a Payload Converter? {#payload-converter}

A Payload Converter serializes data, converting values to bytes and back. When you initiate a Workflow Execution through a Client and pass data as input, the input is serialized using a Data Converter that runs it through a set of Payload Converters. When your Workflow Execution starts, this data input is deserialized and passed as input to your Workflow.

### Composite Data Converters {#composite-data-converters}

A Composite Data Converter is used to apply custom, type-specific Payload Converters in a specified order. A Composite Data Converter can be composed of custom rules that you created, and it can also leverage the default Data Converters built into Temporal. In fact, the default Data Converter logic is implemented internally in the Temporal source as a Composite Data Converter. It defines its rules in this order:

```go
defaultDataConverter = NewCompositeDataConverter(
    NewNilPayloadConverter(),
    NewByteSlicePayloadConverter(),
    NewProtoJSONPayloadConverter(),
    NewProtoPayloadConverter(),
    NewJSONPayloadConverter(),
)
```

The order in which the Payload Converters are applied is important. During serialization, the Data Converter tries the Payload Converters in that specific order until a Payload Converter returns a non-nil Payload.

A custom PayloadConverter must implement the following functions:

- `FromPayload` (for a single value) or `FromPayloads` (for a list of values) to convert values from a Payload, and
- `ToPayload` (for a single value) or `ToPayloads` (for a list of values) to convert values to a Payload.

Defining a new Composite Data Converter is not always necessary to implement custom data handling. Each SDK allows you to override or configure the default Converter with a custom Payload Codec.

---

## Remote Data Encoding

This page discusses [Remote Data Encoding](#remote-data-encoding).

## What is remote data encoding? {#remote-data-encoding}

Remote data encoding is exposing your Payload Codec via HTTP endpoints to support remote encoding and decoding.

Running your encoding remotely allows you to use it with the [Temporal CLI](/cli) to encode/decode data for several commands, including `temporal workflow show`, and with the Temporal Web UI to decode data in your Workflow Execution details view.

To run data encoding/decoding remotely, use a [Codec Server](/codec-server). A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. The Codec Server is independent of the Temporal Service and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment. The Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand.

### Encoding data on the Web UI and CLI

You can perform some operations on your Workflow Execution using the Temporal CLI and the Web UI. For example, you can start or signal an active Workflow Execution from the Temporal CLI or cancel a Workflow Execution from the Web UI, which might require inputs that contain sensitive data.

To encode this data, specify your [Codec Server endpoints](/codec-server) with the `codec-endpoint` parameter in [the Temporal CLI](/cli), and configure your Web UI to use the Codec Server endpoints.

### Decoding data on the Web UI and CLI

If you use custom encoding, Payload data handled by the Temporal Service is stored encoded.
Since the Web UI uses the [Visibility](/temporal-service/visibility) database to show events and data stored on the Temporal Server, all data in the Workflow Execution History in your Web UI is displayed in the encoded format. To decode output when using the Web UI and the Temporal CLI, use a [Codec Server](/codec-server).

Note that a remote data encoder is a separate system with access to your encryption keys that exposes APIs to encode and decode any data. Evaluate and ensure that your remote data encoder endpoints are secured and that only authorized users have access to them.

Samples:

- [Go](https://github.com/temporalio/samples-go/tree/main/codec-server)
- [Java](https://github.com/temporalio/sdk-java/tree/master/temporal-remote-data-encoder)
- [Python](https://github.com/temporalio/samples-python/tree/main/encryption)
- [TypeScript](https://github.com/temporalio/samples-typescript/tree/main/encryption)

---

## Detecting Activity failures

A Workflow can detect different kinds of Activity Execution failures through the following timeouts:

- [Schedule-To-Start Timeout](#schedule-to-start-timeout)
- [Start-To-Close Timeout](#start-to-close-timeout)
- [Schedule-To-Close Timeout](#schedule-to-close-timeout)
- [Activity Heartbeats](#activity-heartbeat)

## Schedule-To-Start Timeout {#schedule-to-start-timeout}

**What is a Schedule-To-Start Timeout in Temporal?**

A Schedule-To-Start Timeout is the maximum amount of time that is allowed from when an [Activity Task](/tasks#activity-task) is scheduled (that is, placed in a Task Queue) to when a [Worker](/workers#worker) starts (that is, picks up from the Task Queue) that Activity Task. In other words, it's a limit on how long an Activity Task can sit in the Task Queue.

The moment the Worker picks up the Task from the Task Queue is considered to be the start of the Activity Task Execution for the purposes of the Schedule-To-Start Timeout and associated metrics. This definition of "start" avoids issues that a clock difference between the Temporal Service and a Worker might create.

"Schedule" in Schedule-To-Start and in Schedule-To-Close carries different frequency guarantees. The Schedule-To-Start Timeout is enforced for each Activity Task, whereas the Schedule-To-Close Timeout is enforced once per Activity Execution. Thus, "Schedule" in Schedule-To-Start refers to the scheduling moment of _every_ Activity Task in the sequence of Activity Tasks that make up the Activity Execution, while "Schedule" in Schedule-To-Close refers to the _first_ Activity Task in that sequence. A [Retry Policy](/encyclopedia/retry-policies) attached to an Activity Execution retries Activity Tasks.

This timeout has two primary use cases:

1. Detect whether an individual Worker has crashed.
2. Detect whether the fleet of Workers polling the Task Queue is not able to keep up with the rate of Activity Tasks.

**The default Schedule-To-Start Timeout is ∞ (infinity).**

If you use this timeout, we recommend setting it to the maximum time a Workflow Execution is willing to wait for an Activity Execution in the presence of all possible Worker outages, and having a concrete plan in place to reroute Activity Tasks to a different Task Queue. This timeout **does not** trigger any retries regardless of the Retry Policy, as a retry would place the Activity Task back into the same Task Queue. We do not recommend using this timeout unless you know what you are doing.
In most cases, we recommend monitoring the `temporal_activity_schedule_to_start_latency` metric to know when Workers slow down in picking up Activity Tasks, instead of setting this timeout.

## Start-To-Close Timeout {#start-to-close-timeout}

**What is a Start-To-Close Timeout in Temporal?**

A Start-To-Close Timeout is the maximum time allowed for a single [Activity Task Execution](/tasks#activity-task-execution).

**The default Start-To-Close Timeout is the same as the default [Schedule-To-Close Timeout](#schedule-to-close-timeout).**

An Activity Execution must have either this timeout (Start-To-Close) or the [Schedule-To-Close Timeout](#schedule-to-close-timeout) set. We recommend always setting this timeout; however, make sure that the Start-To-Close Timeout is always set to be longer than the maximum possible time for the Activity Execution to complete. For long-running Activity Executions, we recommend also using [Activity Heartbeats](#activity-heartbeat) and [Heartbeat Timeouts](#heartbeat-timeout).

:::tip

We strongly recommend setting a Start-To-Close Timeout.

The Temporal Server doesn't detect failures when a Worker loses communication with the Server or crashes. Therefore, the Temporal Server relies on the Start-To-Close Timeout to force Activity retries.

:::

The main use case for the Start-To-Close timeout is to detect when a Worker crashes after it has started executing an Activity Task.

A [Retry Policy](/encyclopedia/retry-policies) attached to an Activity Execution retries Activity Task Executions. Thus, the Start-To-Close Timeout is applied to each Activity Task Execution within an Activity Execution; if the first Activity Task Execution returns an error, the timeout applies anew to each subsequent Activity Task Execution in the chain.

If this timeout is reached, the following actions occur:

- An [ActivityTaskTimedOut](/references/events#activitytasktimedout) Event is written to the Workflow Execution's mutable state.
- If a Retry Policy dictates a retry, the Temporal Service schedules another Activity Task.
- The attempt count increments by 1 in the Workflow Execution's mutable state.
- The Start-To-Close Timeout timer is reset.

## Schedule-To-Close Timeout {#schedule-to-close-timeout}

**What is a Schedule-To-Close Timeout in Temporal?**

A Schedule-To-Close Timeout is the maximum amount of time allowed for the overall [Activity Execution](/activity-execution), from when the first [Activity Task](/tasks#activity-task) is scheduled to when the last Activity Task, in the chain of Activity Tasks that make up the Activity Execution, reaches a Closed status. The Schedule-To-Close Timeout period therefore spans the entire chain of Activity Task Executions, including retries.

**The default Schedule-To-Close Timeout is ∞ (infinity).**

An Activity Execution must have either this timeout (Schedule-To-Close) or [Start-To-Close](#start-to-close-timeout) set. This timeout can be used to control the overall duration of an Activity Execution in the face of failures (repeated Activity Task Executions), without altering the Maximum Attempts field of the Retry Policy.

:::tip

We strongly recommend setting a Start-To-Close Timeout.

The Temporal Server doesn't detect failures when a Worker loses communication with the Server or crashes. Therefore, the Temporal Server relies on the Start-To-Close Timeout to force Activity retries.
:::

## Activity Heartbeat {#activity-heartbeat}

**What is an Activity Heartbeat in Temporal?**

An Activity Heartbeat is a ping from the Worker that is executing the Activity to the Temporal Service. Each ping informs the Temporal Service that the Activity Execution is making progress and the Worker has not crashed. Activity Heartbeats work in conjunction with a [Heartbeat Timeout](#heartbeat-timeout).

Activity Heartbeats are implemented within the Activity Definition. Custom progress information can be included in the Heartbeat, which the Activity Execution can then use should a retry occur. An Activity Heartbeat can be recorded as often as needed (e.g. once a minute or every loop iteration). It is often a good practice to Heartbeat on anything but the shortest Activity Function Execution. Temporal SDKs control the rate at which Heartbeats are sent to the Temporal Service.

Heartbeating is not supported by [Local Activities](/local-activity); Heartbeat calls from a Local Activity do nothing.

For _long-running_ Activities, we recommend using a relatively short Heartbeat Timeout and a frequent Heartbeat. That way, a Worker failure can be detected and handled in a timely manner.

A Heartbeat can include an application layer payload that can be used to _save_ Activity Execution progress. If an [Activity Task Execution](/tasks#activity-task-execution) times out due to a missed Heartbeat, the next Activity Task can access and continue with that payload.

Activity Cancellations are delivered to Activities from the Temporal Service when they Heartbeat. Activities that don't Heartbeat can't receive a Cancellation. Heartbeat throttling may cause a Cancellation to be delivered later than expected.

### Throttling

Heartbeats are not always sent to the Temporal Service; they may be throttled by the Worker. The throttle interval is the smaller of the following:

- If `heartbeatTimeout` is provided, `heartbeatTimeout * 0.8`; otherwise, `defaultHeartbeatThrottleInterval`
- `maxHeartbeatThrottleInterval`

`defaultHeartbeatThrottleInterval` is 30 seconds by default, and `maxHeartbeatThrottleInterval` is 60 seconds by default. Each can be set in Worker options.

Throttling is implemented as follows:

- After sending a Heartbeat, the Worker sets a timer for the throttle interval.
- The Worker stops sending Heartbeats, but continues receiving Heartbeats from the Activity and remembers the most recent one.
- When the timer fires, the Worker:
  - Sends the most recent Heartbeat.
  - Sets the timer again.

Throttling allows the Worker to reduce network traffic and load on the Temporal Service by suppressing Heartbeats that aren't necessary to prevent a Heartbeat Timeout. Throttling does not apply to the final Heartbeat message in the case of Activity Failure. If an Activity fails just after recording progress information in a Heartbeat message, that progress information will be available during the next retry attempt, provided that the Worker itself did not crash before delivering it to the Temporal Service.

### Which Activities should Heartbeat?

Heartbeating is best thought about not in terms of time, but in terms of "How do you know you are making progress?" For short-term operations, progress updates are not a requirement. However, checking the progress and status of Activity Executions that run over long periods is almost always useful.

Consider the following when setting Activity Heartbeats:

- Your underlying task must be able to report definite progress.
  Note that your Workflow cannot read this progress information while the Activity is still executing (that would require storing it in the Event History). You can report progress to external sources if you need to expose it to users.
- Your Activity Execution is long-running, and you need to verify whether the Worker that is processing your Activity is still alive and has not run out of memory or silently crashed.

For example, the following scenarios are suitable for Heartbeating:

- Reading a large file from Amazon S3.
- Running an ML training job on some local GPUs.

And the following scenarios are not suitable for Heartbeating:

- Making a quick API call.
- Reading a small file from disk.

### Heartbeat Timeout {#heartbeat-timeout}

**What is a Heartbeat Timeout in Temporal?**

A Heartbeat Timeout is the maximum time between [Activity Heartbeats](#activity-heartbeat). If this timeout is reached, the Activity Task fails and a retry occurs if a [Retry Policy](/encyclopedia/retry-policies) dictates it.

---

## Detecting application failures

In Temporal, timeouts detect application failures. The system can then automatically mitigate these failures through retries. Both Workflows and Activities have dedicated timeout configurations and can be configured with a RetryPolicy.

- [Detecting Workflow failures](/encyclopedia/detecting-workflow-failures)
- [Detecting Activity failures](/encyclopedia/detecting-activity-failures)
- [Retry Policies](/encyclopedia/retry-policies)

---

## Detecting Workflow failures

Each Workflow Timeout controls the maximum duration of a different aspect of a Workflow Execution. Workflow Timeouts are set when starting the Workflow Execution.

Before we continue, we want to note that we generally do not recommend setting Workflow Timeouts, because Workflows are designed to be long-running and resilient. Setting a Timeout can limit a Workflow's ability to handle unexpected delays or long-running processes. If you need to perform an action inside your Workflow after a specific period of time, we recommend using a Timer.

- [Workflow Execution Timeout](#workflow-execution-timeout)
- [Workflow Run Timeout](#workflow-run-timeout)
- [Workflow Task Timeout](#workflow-task-timeout)

## Workflow Execution Timeout {#workflow-execution-timeout}

**What is a Workflow Execution Timeout in Temporal?**

A Workflow Execution Timeout is the maximum time that a Workflow Execution can be executing (have an Open status), including retries and any usage of Continue As New. **The default value is ∞ (infinite).** If this timeout is reached, the Workflow Execution changes to a Timed Out status. This timeout is different from the [Workflow Run Timeout](#workflow-run-timeout). This timeout is most commonly used for stopping the execution of a [Temporal Cron Job](/cron-job) after a certain amount of time has passed.

## Workflow Run Timeout {#workflow-run-timeout}

**What is a Workflow Run Timeout in Temporal?**

A Workflow Run is a single instance of a specific Workflow Execution. Due to the potential for Workflow Retries or Continue-As-New, a Workflow Execution may have multiple Workflow Runs. For example, if a Workflow that specifies a Retry Policy initially fails and then succeeds during the next retry attempt, there is a single Workflow Execution that spans two Workflow Runs. Both Runs share the same Workflow ID but each has a unique Run ID to distinguish them. A Workflow Run Timeout restricts the maximum duration of a single Workflow Run.
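For reference, here is a minimal Go sketch of where these Workflow Timeouts are set when starting a Workflow Execution; the Workflow Task Timeout described below is set the same way. The Workflow name, Task Queue, Workflow Id, and durations are hypothetical:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/sdk/client"
)

func main() {
	c, err := client.Dial(client.Options{}) // connects to localhost:7233 by default
	if err != nil {
		log.Fatalln("unable to create Temporal Client", err)
	}
	defer c.Close()

	options := client.StartWorkflowOptions{
		ID:        "order-12345", // hypothetical Workflow Id
		TaskQueue: "orders",      // hypothetical Task Queue
		// Caps the whole Workflow Execution, including retries and Continue-As-New.
		// Generally left unset (infinite); shown here for illustration only.
		WorkflowExecutionTimeout: 24 * time.Hour,
		// Caps a single Workflow Run.
		WorkflowRunTimeout: time.Hour,
		// Caps a single Workflow Task; the 10-second default is usually best.
		WorkflowTaskTimeout: 10 * time.Second,
	}
	we, err := c.ExecuteWorkflow(context.Background(), options, "MyWorkflow", "input")
	if err != nil {
		log.Fatalln("unable to start Workflow", err)
	}
	log.Println("started Workflow", we.GetID(), we.GetRunID())
}
```

As noted above, prefer a Timer inside the Workflow over Workflow Timeouts when you simply need to act after a period of time.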
If the Workflow Run Timeout is reached, the Workflow Execution will be Timed Out. Because this Timeout applies only to an individual Workflow Run, it does not include retries or Continue-As-New. **The default is set to the same value as the [Workflow Execution Timeout](#workflow-execution-timeout).** This timeout is most commonly used to limit the execution time of a single [Temporal Cron Job Execution](/cron-job).

## Workflow Task Timeout {#workflow-task-timeout}

**What is a Workflow Task Timeout in Temporal?**

A Workflow Task Timeout is the maximum amount of time allowed for a [Worker](/workers#worker) to execute a [Workflow Task](/tasks#workflow-task) after the Worker has pulled that Workflow Task from the [Task Queue](/task-queue). **The default value is 10 seconds.** This timeout is primarily available to recognize whether a Worker has gone down so that the Workflow Execution can be recovered on a different Worker. The main reason for increasing the default value is to accommodate a Workflow Execution that has an extensive Workflow Execution History, requiring more than 10 seconds for the Worker to load. Although you can extend this timeout up to the maximum value of 120 seconds, we recommend staying at the default.

## Detecting Workflow Task Failures

Use the `TemporalReportedProblems` Search Attribute to detect Workflows with failed Workflow Tasks. A failed Workflow Task does not cause the Workflow to fail. Some Tasks within a Workflow may be intended to fail. For example, a Workflow Task may check a remote data source for new messages. If there aren't any, the Task will fail as intended. If your Task has code to handle the failure, the Workflow will proceed. However, if your Workflow has a Task that fails and the failure is not handled, the Workflow will continue to run, but will not complete. Detecting Workflows in this state is a common troubleshooting issue.

To identify Workflows with Task failures, you can use the Temporal Web UI. See [Task Failures View](/web-ui/#task-failures-view) for more details. You can also detect Workflows with Task failures by searching for the `TemporalReportedProblems` Search Attribute with your observability tools.

:::warning Activating the Task Failures View
To enable the Task Failures View for a Namespace, you need to update the Dynamic Config for that Namespace. See [Activating Task Failures View](/web-ui/#activate-task-failures-view).
:::

---

## Event History Walkthrough with the .NET SDK

To understand how Workflow Replay works, this page goes through the following walkthroughs:

1. [How Workflow Code Maps to Commands](#How-Workflow-Code-Maps-To-Commands)
2. [How Workflow Commands Map to Events](#How-Workflow-Commands-Map-To-Events)
3. [How History Replay Provides Durable Execution](#How-History-Replay-Provides-Durable-Execution)
4. [Example of a Non-Deterministic Workflow](#Example-of-Non-Deterministic-Workflow)

## How Workflow Code Maps to Commands {#How-Workflow-Code-Maps-To-Commands}

This walkthrough covers how the Workflow code maps to Commands that get sent to the Temporal Service, letting the Temporal Service know what to do.
## How Workflow Commands Map to Events {#How-Workflow-Commands-Map-To-Events}

The Commands that are sent to the Temporal Service are then turned into Events, which build up the Event History. The Event History is a detailed log of Events that occur during the lifecycle of a Workflow Execution, such as the execution of Workflow Tasks or Activity Tasks. Event Histories are persisted to the database used by the Temporal Service, so they're durable and will even survive a crash of the Temporal Service itself. These Events are used to recreate a Workflow Execution's state in the case of failure.

## How History Replay Provides Durable Execution {#How-History-Replay-Provides-Durable-Execution}

Now that you have seen how code maps to Commands, and how Commands map to Events, this next walkthrough takes a look at how Temporal uses Replay with the Events to provide Durable Execution and restore a Workflow Execution in the case of a failure.

This walkthrough begins with a Workflow Execution, describing how the code maps to Commands and Events. A Worker then crashes halfway through, and the walkthrough explains how Temporal uses Replay to recover the state of the Workflow Execution, ultimately resulting in a completed execution that's identical to one that had not crashed.

## Example of a Non-Deterministic Workflow {#Example-of-Non-Deterministic-Workflow}

Now that Replay has been covered, this section explains why Workflows need to be [deterministic](https://docs.temporal.io/workflows#deterministic-constraints) in order for Replay to work. A Workflow is deterministic if every execution of its Workflow Definition produces the same Commands in the same sequence given the same input.

As mentioned in the [`How History Replay Provides Durable Execution`](#How-History-Replay-Provides-Durable-Execution) walkthrough, in the case of a failure, a Worker requests the Event History to replay it. During Replay, the Worker runs the Workflow code again to produce a set of Commands which is compared against the sequence of Commands in the Event History. When there's a mismatch between the sequence of Commands the Worker expects based on the Event History and the sequence actually produced during Replay (due to non-determinism), Replay is unable to continue.

To better understand why Workflows need to be deterministic, it's helpful to look at a Workflow Definition that violates this constraint. In this case, the code walks through a Workflow Definition that breaks the determinism constraint with a random number generator.

Note that non-deterministic failures do not fail the Workflow Execution by default. A non-deterministic failure is a [Workflow Task Failure](https://docs.temporal.io/references/failures#workflow-task-failures), which is a transient failure, meaning the Workflow Task is retried over and over. Users can fix the source of the non-determinism, perhaps by removing the Activity, and then restart the Workers; the Workflow can then recover on its own. You can also use a strategy called versioning to address this non-determinism error. See [versioning](https://docs.temporal.io/develop/dotnet/versioning) to learn more.

For more information on how Temporal handles Durable Execution or to see these slides in a video format with more explanation, check out our free, self-paced courses: [Temporal 102](https://learn.temporal.io/courses/temporal_102/) and [Versioning Workflows](https://learn.temporal.io/courses/versioning/).
## Temporal Applications Support Non-Deterministic Operations

We want to emphasize that although your Workflows themselves need to be deterministic, your application itself does not! Remember that pretty much anything that interacts with the external world is inherently non-deterministic:

- Calling LLM APIs
- Querying databases
- Reading or writing files
- Making HTTP requests to external services

**Good news**: Your Temporal application can absolutely handle all of these operations. While your Workflow must be deterministic, the rest of your application can perform any type of non-deterministic operation, including those listed above. This gives you the best of both worlds: the crash-proof reliability of a Workflow and the resiliency of Activities, which have built-in support for retries.

---

## Event History

With Temporal, your Workflows can seamlessly recover from crashes. This is made possible by the [Event History](https://docs.temporal.io/workflow-execution/event), a complete and durable log of everything that has happened in the lifecycle of a Workflow Execution, as well as the ability of the Temporal Service to durably persist those Events so they are available for Replay.

Temporal uses the Event History to record every step taken along the way. Each time your Workflow Definition makes an API call to execute an Activity or start a Timer, for instance, it doesn't perform the action directly. Instead, it sends a Command to the Temporal Service. A Command is a requested action issued by a Worker to the Temporal Service after a Workflow Task Execution completes. The Temporal Service acts on these Commands, such as by scheduling an Activity or starting a Timer. These Commands are then mapped to Events, which are persisted in case of failure. For example, if a Worker crashes, a Worker uses the Event History to replay the code and recreate the state of the Workflow Execution to what it was immediately before the crash. It then resumes progress from the point of failure as if the failure never occurred.

For a deep dive on how the Event History works, refer to the SDK-specific walkthroughs listed below.

- [Go](/encyclopedia/event-history/event-history-go)
- [Java](/encyclopedia/event-history/event-history-java)
- [Python](/encyclopedia/event-history/event-history-python)
- [TypeScript](/encyclopedia/event-history/event-history-typescript)
- [.NET](/encyclopedia/event-history/event-history-dotnet)

---

## Event History Walkthrough with the Go SDK

To understand how Workflow Replay works, this page goes through the following walkthroughs:

1. [How Workflow Code Maps to Commands](#How-Workflow-Code-Maps-To-Commands)
2. [How Workflow Commands Map to Events](#How-Workflow-Commands-Map-To-Events)
3. [How History Replay Provides Durable Execution](#How-History-Replay-Provides-Durable-Execution)
4. [Example of a Non-Deterministic Workflow](#Example-of-Non-Deterministic-Workflow)

## How Workflow Code Maps to Commands {#How-Workflow-Code-Maps-To-Commands}

This walkthrough covers how the Workflow code maps to Commands that get sent to the Temporal Service, letting the Temporal Service know what to do.

## How Workflow Commands Map to Events {#How-Workflow-Commands-Map-To-Events}

The Commands that are sent to the Temporal Service are then turned into Events, which build up the Event History. The Event History is a detailed log of Events that occur during the lifecycle of a Workflow Execution, such as the execution of Workflow Tasks or Activity Tasks.
Event Histories are persisted to the database used by the Temporal Service, so they're durable and will even survive a crash of the Temporal Service itself. These Events are used to recreate a Workflow Execution's state in the case of failure.

## How History Replay Provides Durable Execution {#How-History-Replay-Provides-Durable-Execution}

Now that you have seen how code maps to Commands, and how Commands map to Events, this next walkthrough takes a look at how Temporal uses Replay with the Events to provide Durable Execution and restore a Workflow Execution in the case of a failure.

This walkthrough begins with a Workflow Execution, describing how the code maps to Commands and Events. A Worker then crashes halfway through, and the walkthrough explains how Temporal uses Replay to recover the state of the Workflow Execution, ultimately resulting in a completed execution that's identical to one that had not crashed.

## Example of a Non-Deterministic Workflow {#Example-of-Non-Deterministic-Workflow}

Now that Replay has been covered, this section explains why Workflows need to be [deterministic](https://docs.temporal.io/workflows#deterministic-constraints) in order for Replay to work. A Workflow is deterministic if every execution of its Workflow Definition produces the same Commands in the same sequence given the same input.

As mentioned in the [`How History Replay Provides Durable Execution`](#How-History-Replay-Provides-Durable-Execution) walkthrough, in the case of a failure, a Worker requests the Event History to replay it. During Replay, the Worker runs the Workflow code again to produce a set of Commands which is compared against the sequence of Commands in the Event History. When there's a mismatch between the sequence of Commands the Worker expects based on the Event History and the sequence actually produced during Replay (due to non-determinism), Replay is unable to continue.

To better understand why Workflows need to be deterministic, it's helpful to look at a Workflow Definition that violates this constraint. In this case, the code walks through a Workflow Definition that breaks the determinism constraint with a random number generator.

Note that non-deterministic failures do not fail the Workflow Execution by default. A non-deterministic failure is a [Workflow Task Failure](https://docs.temporal.io/references/failures#workflow-task-failures), which is a transient failure, meaning the Workflow Task is retried over and over. Users can fix the source of the non-determinism, perhaps by removing the Activity, and then restart the Workers; the Workflow can then recover on its own. You can also use a strategy called versioning to address this non-determinism error. See [versioning](https://docs.temporal.io/develop/go/versioning) to learn more.

For more information on how Temporal handles Durable Execution or to see these slides in a video format with more explanation, check out our free, self-paced courses: [Temporal 102](https://learn.temporal.io/courses/temporal_102/) and [Versioning Workflows](https://learn.temporal.io/courses/versioning/).

## Temporal Applications Support Non-Deterministic Operations

We want to emphasize that although your Workflows themselves need to be deterministic, your application itself does not!
Remember that pretty much anything that interacts with the external world is inherently non-deterministic:

- Calling LLM APIs
- Querying databases
- Reading or writing files
- Making HTTP requests to external services

**Good news**: Your Temporal application can absolutely handle all of these operations. While your Workflow must be deterministic, the rest of your application can perform any type of non-deterministic operation, including those listed above. This gives you the best of both worlds: the crash-proof reliability of a Workflow and the resiliency of Activities, which have built-in support for retries.

---

## Event History Walkthrough with the Java SDK

To understand how Workflow Replay works, this page goes through the following walkthroughs:

1. [How Workflow Code Maps to Commands](#How-Workflow-Code-Maps-To-Commands)
2. [How Workflow Commands Map to Events](#How-Workflow-Commands-Map-To-Events)
3. [How History Replay Provides Durable Execution](#How-History-Replay-Provides-Durable-Execution)
4. [Example of a Non-Deterministic Workflow](#Example-of-Non-Deterministic-Workflow)

## How Workflow Code Maps to Commands {#How-Workflow-Code-Maps-To-Commands}

This walkthrough covers how the Workflow code maps to Commands that get sent to the Temporal Service, letting the Temporal Service know what to do.

## How Workflow Commands Map to Events {#How-Workflow-Commands-Map-To-Events}

The Commands that are sent to the Temporal Service are then turned into Events, which build up the Event History. The Event History is a detailed log of Events that occur during the lifecycle of a Workflow Execution, such as the execution of Workflow Tasks or Activity Tasks. Event Histories are persisted to the database used by the Temporal Service, so they're durable and will even survive a crash of the Temporal Service itself. These Events are used to recreate a Workflow Execution's state in the case of failure.

## How History Replay Provides Durable Execution {#How-History-Replay-Provides-Durable-Execution}

Now that you have seen how code maps to Commands, and how Commands map to Events, this next walkthrough takes a look at how Temporal uses Replay with the Events to provide Durable Execution and restore a Workflow Execution in the case of a failure.

This walkthrough begins with a Workflow Execution, describing how the code maps to Commands and Events. A Worker then crashes halfway through, and the walkthrough explains how Temporal uses Replay to recover the state of the Workflow Execution, ultimately resulting in a completed execution that's identical to one that had not crashed.

## Example of a Non-Deterministic Workflow {#Example-of-Non-Deterministic-Workflow}

Now that Replay has been covered, this section explains why Workflows need to be [deterministic](https://docs.temporal.io/workflows#deterministic-constraints) in order for Replay to work. A Workflow is deterministic if every execution of its Workflow Definition produces the same Commands in the same sequence given the same input.

As mentioned in the [`How History Replay Provides Durable Execution`](#How-History-Replay-Provides-Durable-Execution) walkthrough, in the case of a failure, a Worker requests the Event History to replay it. During Replay, the Worker runs the Workflow code again to produce a set of Commands which is compared against the sequence of Commands in the Event History.
When there's a mismatch between the sequence of Commands the Worker expects based on the Event History and the sequence actually produced during Replay (due to non-determinism), Replay is unable to continue.

To better understand why Workflows need to be deterministic, it's helpful to look at a Workflow Definition that violates this constraint. In this case, the code walks through a Workflow Definition that breaks the determinism constraint with a random number generator.

Note that non-deterministic failures do not fail the Workflow Execution by default. A non-deterministic failure is a [Workflow Task Failure](https://docs.temporal.io/references/failures#workflow-task-failures), which is a transient failure, meaning the Workflow Task is retried over and over. Users can fix the source of the non-determinism, perhaps by removing the Activity, and then restart the Workers; the Workflow can then recover on its own. You can also use a strategy called versioning to address this non-determinism error. See [versioning](https://docs.temporal.io/develop/java/versioning) to learn more.

For more information on how Temporal handles Durable Execution or to see these slides in a video format with more explanation, check out our free, self-paced courses: [Temporal 102](https://learn.temporal.io/courses/temporal_102/) and [Versioning Workflows](https://learn.temporal.io/courses/versioning/).

## Temporal Applications Support Non-Deterministic Operations

We want to emphasize that although your Workflows themselves need to be deterministic, your application itself does not! Remember that pretty much anything that interacts with the external world is inherently non-deterministic:

- Calling LLM APIs
- Querying databases
- Reading or writing files
- Making HTTP requests to external services

**Good news**: Your Temporal application can absolutely handle all of these operations. While your Workflow must be deterministic, the rest of your application can perform any type of non-deterministic operation, including those listed above. This gives you the best of both worlds: the crash-proof reliability of a Workflow and the resiliency of Activities, which have built-in support for retries.

---

## Event History Walkthrough with the Python SDK

To understand how Workflow Replay works, this page goes through the following walkthroughs:

1. [How Workflow Code Maps to Commands](#How-Workflow-Code-Maps-To-Commands)
2. [How Workflow Commands Map to Events](#How-Workflow-Commands-Map-To-Events)
3. [How History Replay Provides Durable Execution](#How-History-Replay-Provides-Durable-Execution)
4. [Example of a Non-Deterministic Workflow](#Example-of-Non-Deterministic-Workflow)

## How Workflow Code Maps to Commands {#How-Workflow-Code-Maps-To-Commands}

This walkthrough covers how the Workflow code maps to Commands that get sent to the Temporal Service, letting the Temporal Service know what to do.

## How Workflow Commands Map to Events {#How-Workflow-Commands-Map-To-Events}

The Commands that are sent to the Temporal Service are then turned into Events, which build up the Event History. The Event History is a detailed log of Events that occur during the lifecycle of a Workflow Execution, such as the execution of Workflow Tasks or Activity Tasks. Event Histories are persisted to the database used by the Temporal Service, so they're durable and will even survive a crash of the Temporal Service itself. These Events are used to recreate a Workflow Execution's state in the case of failure.
## How History Replay Provides Durable Execution {#How-History-Replay-Provides-Durable-Execution}

Now that you have seen how code maps to Commands, and how Commands map to Events, this next walkthrough takes a look at how Temporal uses Replay with the Events to provide Durable Execution and restore a Workflow Execution in the case of a failure.

This walkthrough begins with a Workflow Execution, describing how the code maps to Commands and Events. A Worker then crashes halfway through, and the walkthrough explains how Temporal uses Replay to recover the state of the Workflow Execution, ultimately resulting in a completed execution that's identical to one that had not crashed.

## Example of a Non-Deterministic Workflow {#Example-of-Non-Deterministic-Workflow}

Now that Replay has been covered, this section explains why Workflows need to be [deterministic](https://docs.temporal.io/workflows#deterministic-constraints) in order for Replay to work. A Workflow is deterministic if every execution of its Workflow Definition produces the same Commands in the same sequence given the same input.

As mentioned in the [`How History Replay Provides Durable Execution`](#How-History-Replay-Provides-Durable-Execution) walkthrough, in the case of a failure, a Worker requests the Event History to replay it. During Replay, the Worker runs the Workflow code again to produce a set of Commands which is compared against the sequence of Commands in the Event History. When there's a mismatch between the sequence of Commands the Worker expects based on the Event History and the sequence actually produced during Replay (due to non-determinism), Replay is unable to continue.

To better understand why Workflows need to be deterministic, it's helpful to look at a Workflow Definition that violates this constraint. In this case, the code walks through a Workflow Definition that breaks the determinism constraint with a random number generator.

Note that non-deterministic failures do not fail the Workflow Execution by default. A non-deterministic failure is a [Workflow Task Failure](https://docs.temporal.io/references/failures#workflow-task-failures), which is a transient failure, meaning the Workflow Task is retried over and over. Users can fix the source of the non-determinism, perhaps by removing the Activity, and then restart the Workers; the Workflow can then recover on its own. You can also use a strategy called versioning to address this non-determinism error. See [versioning](https://docs.temporal.io/develop/python/versioning) to learn more.

For more information on how Temporal handles Durable Execution or to see these slides in a video format with more explanation, check out our free, self-paced courses: [Temporal 102](https://learn.temporal.io/courses/temporal_102/) and [Versioning Workflows](https://learn.temporal.io/courses/versioning/).

## Temporal Applications Support Non-Deterministic Operations

We want to emphasize that although your Workflows themselves need to be deterministic, your application itself does not! Remember that pretty much anything that interacts with the external world is inherently non-deterministic:

- Calling LLM APIs
- Querying databases
- Reading or writing files
- Making HTTP requests to external services

**Good news**: Your Temporal application can absolutely handle all of these operations.
While your Workflow must be deterministic, the rest of your application can perform any type of non-deterministic operation, including those listed above. This gives you the best of both worlds: the crash-proof reliability of a Workflow and the resiliency of Activities, which have built-in support for retries.

---

## Event History Walkthrough with the TypeScript SDK

To understand how Workflow Replay works, this page goes through the following walkthroughs:

1. [How Workflow Code Maps to Commands](#How-Workflow-Code-Maps-To-Commands)
2. [How Workflow Commands Map to Events](#How-Workflow-Commands-Map-To-Events)
3. [How History Replay Provides Durable Execution](#How-History-Replay-Provides-Durable-Execution)
4. [Example of a Non-Deterministic Workflow](#Example-of-Non-Deterministic-Workflow)

## How Workflow Code Maps to Commands {#How-Workflow-Code-Maps-To-Commands}

This walkthrough covers how the Workflow code maps to Commands that get sent to the Temporal Service, letting the Temporal Service know what to do.

## How Workflow Commands Map to Events {#How-Workflow-Commands-Map-To-Events}

The Commands that are sent to the Temporal Service are then turned into Events, which build up the Event History. The Event History is a detailed log of Events that occur during the lifecycle of a Workflow Execution, such as the execution of Workflow Tasks or Activity Tasks. Event Histories are persisted to the database used by the Temporal Service, so they're durable and will even survive a crash of the Temporal Service itself. These Events are used to recreate a Workflow Execution's state in the case of failure.

## How History Replay Provides Durable Execution {#How-History-Replay-Provides-Durable-Execution}

Now that you have seen how code maps to Commands, and how Commands map to Events, this next walkthrough takes a look at how Temporal uses Replay with the Events to provide Durable Execution and restore a Workflow Execution in the case of a failure.

This walkthrough begins with a Workflow Execution, describing how the code maps to Commands and Events. A Worker then crashes halfway through, and the walkthrough explains how Temporal uses Replay to recover the state of the Workflow Execution, ultimately resulting in a completed execution that's identical to one that had not crashed.

## Example of a Non-Deterministic Workflow {#Example-of-Non-Deterministic-Workflow}

Now that Replay has been covered, this section explains why Workflows need to be [deterministic](https://docs.temporal.io/workflows#deterministic-constraints) in order for Replay to work. A Workflow is deterministic if every execution of its Workflow Definition produces the same Commands in the same sequence given the same input.

As mentioned in the [`How History Replay Provides Durable Execution`](#How-History-Replay-Provides-Durable-Execution) walkthrough, in the case of a failure, a Worker requests the Event History to replay it. During Replay, the Worker runs the Workflow code again to produce a set of Commands which is compared against the sequence of Commands in the Event History. When there's a mismatch between the sequence of Commands the Worker expects based on the Event History and the sequence actually produced during Replay (due to non-determinism), Replay is unable to continue.

To better understand why Workflows need to be deterministic, it's helpful to look at a Workflow Definition that violates this constraint.
In this case, the code walks through a Workflow Definition that breaks the determinism constraint with a random number generator.

Note that non-deterministic failures do not fail the Workflow Execution by default. A non-deterministic failure is a [Workflow Task Failure](https://docs.temporal.io/references/failures#workflow-task-failures), which is a transient failure, meaning the Workflow Task is retried over and over. Users can fix the source of the non-determinism, perhaps by removing the Activity, and then restart the Workers; the Workflow can then recover on its own. You can also use a strategy called versioning to address this non-determinism error. See [versioning](https://docs.temporal.io/develop/typescript/versioning) to learn more.

For more information on how Temporal handles Durable Execution or to see these slides in a video format with more explanation, check out our free, self-paced courses: [Temporal 102](https://learn.temporal.io/courses/temporal_102/) and [Versioning Workflows](https://learn.temporal.io/courses/versioning/).

## Temporal Applications Support Non-Deterministic Operations

We want to emphasize that although your Workflows themselves need to be deterministic, your application itself does not! Remember that pretty much anything that interacts with the external world is inherently non-deterministic:

- Calling LLM APIs
- Querying databases
- Reading or writing files
- Making HTTP requests to external services

**Good news**: Your Temporal application can absolutely handle all of these operations. While your Workflow must be deterministic, the rest of your application can perform any type of non-deterministic operation, including those listed above. This gives you the best of both worlds: the crash-proof reliability of a Workflow and the resiliency of Activities, which have built-in support for retries.

---

## Temporal Encyclopedia

[Temporal](/evaluate/why-temporal) provides developers with a suite of effective tools for building reliable applications at scale. The following Encyclopedia pages describe the concepts, components, and features of Temporal in detail:

- [Temporal](/temporal)
- [Temporal SDKs](/encyclopedia/temporal-sdks)
- [Workflows](/workflows)
- [Activities](/activities)
- [Detecting application failures](/encyclopedia/detecting-application-failures)
- [Workers](/workers)
- [Event History](/encyclopedia/event-history/)
- [Workflow Message Passing](/encyclopedia/workflow-message-passing/)
- [Child Workflows](/child-workflows)
- [Visibility](/visibility)
- [Temporal Service](/temporal-service)
- [Namespaces](/namespaces)
- [Temporal Nexus](/nexus)
- [Data conversion](/dataconversion)

For a complete list of Temporal terms, see the [Glossary](/glossary). For information on how to implement the developer-facing features, see the [Develop](/develop) section. For information on how to use Temporal Cloud, see the [Temporal Cloud production deployment](/cloud) section. For information on how to self-host a Temporal Service, see the [Self-hosted production deployment](/self-hosted-guide) section.

---

## Global Namespace

This page provides an overview of Global Namespaces.

## What is a Global Namespace? {#global-namespace}

A Global Namespace is a [Namespace](/namespaces) that exists across Clusters when [Multi-Cluster Replication](/temporal-service/multi-cluster-replication) is set up.
- [How to register a Global Namespace](/cli/operator#create)
- [How to change the active Cluster for a Global Namespace](/cli/operator#update)

The Global Namespace feature enables Workflow Executions to progress through another Cluster in the event of a failover. A Global Namespace may be replicated to any number of Clusters, but is active in only one Cluster at any given time. For a failover to be successful, Worker Processes must be polling for Tasks for the Global Namespace on all Clusters.

A Global Namespace has a failover version. Because a failover can be triggered from any Cluster, the failover version prevents certain conflicts from occurring if a failover is mistakenly triggered simultaneously on two Clusters.

Only the active Cluster dispatches [Tasks](/tasks#task); however, certain conflicts are possible. Unlike regular Namespaces, which provide at-most-once semantics for an Activity Execution, Global Namespaces can support only at-least-once semantics (see [Conflict resolution](/temporal-service/multi-cluster-replication#conflict-resolution)). Worker Processes on the standby Clusters are idle until a failover occurs and their Cluster becomes active.

Temporal Application API calls made to a non-active Cluster are rejected with a **NamespaceNotActiveError**, which contains the name of the currently active Cluster. It is the responsibility of the Temporal Application to call the Cluster that is currently active.

---

## Temporal Namespace

:::info Open source and Temporal Cloud
This page covers core namespace concepts that apply to both open source Temporal and Temporal Cloud. Temporal Cloud namespaces include additional capabilities, such as [API key](/cloud/api-keys) and [mTLS authentication](/cloud/certificates), [built-in role-based access controls](/cloud/users#namespace-level-permissions), [high availability replication](/cloud/high-availability), and [namespace tags](/cloud/namespaces#tag-a-namespace). Moving from self-hosting to Cloud, or the reverse, requires zero code changes and incurs zero downtime.
:::

A Namespace is a unit of isolation within the [Temporal Platform](/temporal#temporal-platform). [Task Queues](/task-queue) and [Workflow Executions](/workflow-execution) belong to a Namespace. When a Workflow Execution is spawned, it does so within a specific Namespace.

## Usage

- **Workflow Id uniqueness**: Temporal guarantees a unique Workflow Id within a Namespace. Workflow Executions may have the same Workflow Id if they are in different Namespaces.
- **Resource isolation**: Heavy traffic from one Namespace will not impact other Namespaces running on the same Temporal Service.
- **Configuration boundaries**: Options like the [Retention Period](/temporal-service/temporal-server#retention-period) and [Archival](/temporal-service/archival) destination are configured per Namespace.
- **Default Namespace**: If no Namespace is specified, the Temporal Service uses the Namespace "default" for all Temporal SDKs and the Temporal CLI. You must create a Namespace before using it in your Client.
- **Multi-tenancy**: A single Namespace is still multi-tenant. Multiple applications or teams can share a Namespace, but they must coordinate on Workflow Id and Task Queue naming to avoid conflicts.
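As a small illustration of the Namespace-scoped Client mentioned above, here is a hedged Go sketch; the Namespace name is hypothetical:

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// All Workflow Executions started through this Client are scoped to the
	// "payments" Namespace (hypothetical name). If Namespace is omitted,
	// the Client uses the "default" Namespace, which must already exist.
	c, err := client.Dial(client.Options{
		Namespace: "payments",
	})
	if err != nil {
		log.Fatalln("unable to create Temporal Client", err)
	}
	defer c.Close()
}
```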
## Namespace operations

To learn how to create and manage Namespaces, see:

- **Open source Temporal**: [Managing Namespaces](/self-hosted-guide/namespaces)
- **Temporal Cloud**: [Temporal Cloud Namespaces](/cloud/namespaces)

---

## Nexus Endpoints

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).
:::

A [Temporal Nexus Endpoint](/glossary#nexus-endpoint) is a reverse proxy that can route Nexus requests from a caller Workflow to an upstream target Namespace and Task Queue. A [Nexus Service](/nexus/services) runs in a Worker that is polling the Endpoint's target Task Queue.

An Endpoint decouples the caller and handler, so the caller only needs to know the Endpoint name. The Endpoint hides the upstream target Namespace and Task Queue from the caller. A Worker handles Nexus requests by registering one or more [Nexus Services](/nexus/services) and polling the Endpoint's target Task Queue. Multiple Nexus Endpoints can target different Task Queues in the same target Namespace.

## Reverse proxy for Nexus Services, not a general-purpose L7 proxy

A [Temporal Nexus Endpoint](/glossary#nexus-endpoint) is a reverse proxy for [Nexus Services](/nexus/services). It is not a general-purpose L7 reverse proxy like NGINX, which can route arbitrary HTTP requests to different upstream targets. A Nexus Endpoint currently supports routing Nexus requests to a single upstream target.

The Temporal Nexus [EndpointSpec](https://github.com/temporalio/api/blob/2a5b3951e71565e28628edea1b3d88d69ed26607/temporal/api/nexus/v1/message.proto#L170) has two [Endpoint target types](https://github.com/temporalio/api/blob/2a5b3951e71565e28628edea1b3d88d69ed26607/temporal/api/nexus/v1/message.proto#L185):

- Worker: route Nexus requests to a target Namespace and Task Queue.
- External (experimental): route Nexus requests to an external target [Nexus RPC endpoint](https://github.com/nexus-rpc/api/blob/main/SPEC.md), with experimental support in `temporal operator nexus create endpoint` for `--target-url`, which may be used with the Nexus Registry in a self-hosted Temporal Service.

## Deploying a Nexus Endpoint

Adding a Nexus Endpoint to the [Nexus Registry](/nexus/registry) deploys the Endpoint in the Temporal Service, so it is available at runtime to serve Nexus requests.

---

## Error Handling - Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).
:::

Nexus Operations can return an error for a caller Workflow to handle. If an asynchronous Nexus Operation starts a Workflow that returns an error, the error is propagated back to the caller Workflow.

## Errors in Nexus handlers

In Temporal, a user-defined Nexus handler is primarily responsible for starting a Nexus Operation. Nexus handlers run in a Temporal Worker and use Temporal SDK builder functions like New-Sync-Operation or New-Workflow-Run-Operation to start an Operation. Nexus handlers may return [different error types](/references/failures#nexus-errors).
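To sketch what this can look like in practice, here is a hedged Go example of a synchronous Operation handler that returns a non-retryable handler error for bad input. The names and payload types are hypothetical, and exact helper signatures may vary by SDK version:

```go
package app

import (
	"context"

	"github.com/nexus-rpc/sdk-go/nexus"
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/temporalnexus"
)

// EchoInput and EchoOutput are hypothetical payload types.
type EchoInput struct{ Message string }
type EchoOutput struct{ Message string }

// EchoOperation is a synchronous Operation that rejects empty input with a
// non-retryable handler error instead of letting the Nexus Machinery retry.
var EchoOperation = temporalnexus.NewSyncOperation("echo",
	func(ctx context.Context, c client.Client, in EchoInput, opts nexus.StartOperationOptions) (EchoOutput, error) {
		if in.Message == "" {
			// BAD_REQUEST is non-retryable: the caller Workflow sees a
			// NexusOperationFailed event rather than endless retries.
			return EchoOutput{}, nexus.HandlerErrorf(nexus.HandlerErrorTypeBadRequest, "message must not be empty")
		}
		return EchoOutput{Message: in.Message}, nil
	})
```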
Nexus Operations can end up in [completed](/references/events#nexusoperationcompleted), [failed](/references/events#nexusoperationfailed), [canceled](/references/events#nexusoperationcanceled), and [timed out](/references/events#nexusoperationtimedout) states.

The Nexus Machinery breaks up the [Nexus Operation lifecycle](/nexus/operations#operation-lifecycle) into one or more [Nexus Tasks](/tasks#nexus-task) that a Nexus handler is responsible for processing. It creates a Nexus Task to start an Operation and may create additional Nexus Tasks, for example to cancel a long-running [asynchronous Operation](/nexus/operations#asynchronous-operation-lifecycle).

By default, Nexus handler errors are considered retryable, unless they are one of the following:

- [Application Failures](/references/failures#nexus-errors) marked as non-retryable.
- [Unsuccessful Operation errors](/references/failures#nexus-errors) that can resolve an operation as either failed or canceled.
- [Non-retryable Nexus errors](/references/failures#non-retryable-nexus-errors).

For example, if an unknown error is returned from a Nexus handler, it will be classified as a retryable error.

When an error is received by the caller's Nexus Machinery:

- If a [non-retryable error](/references/failures#non-retryable-nexus-errors) is returned, the caller Workflow will have a [NexusOperationFailed](/references/events#nexusoperationfailed) event added to its Workflow History.
- If a [retryable error](/references/failures#retryable-nexus-errors) is returned, the Nexus Machinery will automatically retry the [Nexus Task](/tasks#nexus-task), as discussed in [automatic retries](/nexus/operations#automatic-retries). These errors are visible to the caller Workflow as part of integrated execution debugging in [Pending Operations](/nexus/execution-debugging/#pending-operations).

:::tip
To avoid infinite [automatic retries](/nexus/operations#automatic-retries) and improve semantics, custom Nexus handlers should return a [specific Nexus error type](/references/failures#nexus-errors). See [errors in Nexus Operations](/references/failures#errors-in-nexus-operations) for additional details.
:::

## Nexus error handling in caller Workflows

A Nexus Operation Failure is delivered to the Workflow Execution when a Nexus Operation fails. It contains information about the failure and the Nexus Operation Execution; for example, the Nexus Operation name and Nexus Operation token. The reason for the failure is in the message, and the underlying cause is typically an Application Error or a Canceled Error.

:::tip RESOURCES
- [Errors in Nexus Operations](/references/failures#errors-in-nexus-operations)
- [Nexus Errors](/references/failures#nexus-errors)
- [Nexus Operation Failures](/references/failures#nexus-operation-failure)
:::

---

## Execution Debugging - Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).
:::

Nexus supports end-to-end execution debugging that may span:

- Caller Workflows
- One or more Nexus Operations executed within and across Namespaces
- Underlying Temporal primitives, like a Workflow, created by a Nexus Operation handler

This includes [multi-level Nexus calls](/nexus#multi-level-calls), for example:

- Workflow A → Nexus Operation 1 → Workflow B → Nexus Operation 2 → Workflow C

## Bi-directional linking

Bi-directional links enable navigation of an end-to-end execution across Namespace boundaries in the Temporal UI. They are automatically added by the Temporal SDK for builder functions like New-Workflow-Run-Operation. Links are auto-wired from a specific Nexus Operation event in the caller's Workflow history to a specific event in the handler's Workflow history.

Bi-directional links enable navigating across Namespace boundaries:

- Forward through the Nexus Operation execution:
  - From a Nexus Operation event in the caller's Workflow history.
  - To the underlying event in the handler's Workflow.
- Backwards through the Nexus Operation execution:
  - From the underlying event in the handler's Workflow.
  - To a Nexus Operation event in the caller's Workflow history.

## Pending Operations

Similar to pending Activities, pending Nexus Operations are displayed on the Workflow details page in the Temporal UI and in the output of `temporal workflow describe`. For example, from the `temporal` CLI:

```
temporal workflow describe

Pending Nexus Operations: 1
  Endpoint                 myendpoint
  Service                  my-hello-service
  Operation                echo
  OperationToken
  State                    BackingOff
  Attempt                  6
  ScheduleToCloseTimeout   0s
  NextAttemptScheduleTime  20 seconds from now
  LastAttemptCompleteTime  11 seconds ago
  LastAttemptFailure       {"message":"handler error (INTERNAL): internal error","applicationFailureInfo":{}}
```

Retryable Nexus errors [returned from a Nexus handler](/nexus/error-handling#errors-in-nexus-handlers) will surface as part of the Pending Operation in a caller Workflow. Non-retryable errors will result in the Nexus Operation reaching a final state in the caller Workflow, with a [Failed](/references/events#nexusoperationfailed), [TimedOut](/references/events#nexusoperationtimedout), or [Canceled](/references/events#nexusoperationcanceled) event.

## Pending Callbacks

Nexus callbacks are sent from the handler's Namespace to the caller's Namespace to complete an asynchronous Nexus Operation. These show up on the Workflow details page in the Temporal UI and in the output of `temporal workflow describe`. For example, from the `temporal` CLI:

```
temporal workflow describe

Callbacks: 1
  URL               https://nexus.phil-caller-Namespace.a2dd6.cluster.tmprl.cloud:7243/Namespaces/phil-caller-Namespace.a2dd6/nexus/callback
  Trigger           WorkflowClosed
  State             Succeeded
  Attempt           1
  RegistrationTime  32 minutes ago
```

## Tracing

Temporal integrates with tracing libraries like [OpenTelemetry](https://opentelemetry.io/) and [OpenTracing](https://opentracing.io/). Tracing allows you to visualize the call graph of a Workflow, including its Activities, Nexus Operations, and Child Workflows. You can enable tracing by installing an interceptor on the Temporal Client or Worker in the supported SDKs.
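For example, with the Go SDK, a tracing interceptor can be installed on the Client. A hedged sketch using the `go.temporal.io/sdk/contrib/opentelemetry` package (OpenTelemetry exporter and TracerProvider setup omitted):

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/contrib/opentelemetry"
	"go.temporal.io/sdk/interceptor"
)

func main() {
	// Uses the globally registered OpenTelemetry TracerProvider by default;
	// configure and register an exporter before this in a real program.
	tracingInterceptor, err := opentelemetry.NewTracingInterceptor(opentelemetry.TracerOptions{})
	if err != nil {
		log.Fatalln("unable to create tracing interceptor", err)
	}

	c, err := client.Dial(client.Options{
		Interceptors: []interceptor.ClientInterceptor{tracingInterceptor},
	})
	if err != nil {
		log.Fatalln("unable to create Temporal Client", err)
	}
	defer c.Close()
}
```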
See the samples linked below to enable tracing for the SDKs:

- [Go SDK](https://github.com/temporalio/samples-go/tree/main/opentelemetry)
- [Java SDK](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/tracing)

---

## Nexus Metrics

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).
:::

Nexus provides SDK metrics, Cloud metrics, and OSS Cluster metrics in addition to integrated [execution debugging](/nexus/execution-debugging).

## SDK Metrics

[SDK metrics](/references/sdk-metrics) are emitted from a Nexus Worker, including:

- [nexus_poll_no_task](/references/sdk-metrics#nexus_poll_no_task)
- [nexus_task_schedule_to_start_latency](/references/sdk-metrics#nexus_task_schedule_to_start_latency)
- [nexus_task_execution_failed](/references/sdk-metrics#nexus_task_execution_failed)
- [nexus_task_execution_latency](/references/sdk-metrics#nexus_task_execution_latency)
- [nexus_task_endtoend_latency](/references/sdk-metrics#nexus_task_endtoend_latency)

## Cloud Metrics

[Cloud metrics](/cloud/metrics/reference) are emitted by Temporal Cloud, including:

- Caller Namespace
  - RespondWorkflowTaskCompleted: schedule a Nexus Operation.
- Handler Namespace
  - PollNexusTaskQueue: get a [Nexus Task](/tasks#nexus-task) to process, for example to start a Nexus Operation.
  - RespondNexusTaskCompleted: report the Nexus Task was successful.
  - RespondNexusTaskFailed: report the Nexus Task failed.

## OSS Cluster Metrics

[Cluster metrics](/references/cluster-metrics#nexus-metrics) are emitted from an OSS Cluster, including:

- History Service metrics
- Concurrency Limiter metrics
- Frontend Service metrics

---

## Nexus Operations

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).
:::

[Nexus Operations](/glossary#nexus-operation) are arbitrary-duration operations that may be synchronous or asynchronous, short-lived or long-lived, and are used to connect Temporal Applications within and across Namespaces, clusters, regions, and clouds. Unlike a traditional RPC, an asynchronous Nexus Operation has an operation token that can be used to re-attach to a long-lived Nexus Operation, for example, one backed by a Temporal Workflow. Nexus Operations support a uniform interface to get the status of an operation or its result, receive a completion callback, or cancel the operation, all of which are fully integrated into the Temporal Platform.

## SDK support

The Temporal SDK provides an integrated Temporal experience to build, run, and use Nexus Operations.

:::tip RESOURCES
- [Go SDK - Nexus quick start and code sample](/develop/go/nexus)
- [Java SDK - Nexus quick start and code sample](/develop/java/nexus)
:::

Caller Workflows use the Temporal SDK to execute Nexus Operations. Nexus Operation handlers are created with Temporal SDK builder functions such as:

- New-Workflow-Run-Operation
  - Start a Workflow as an asynchronous Operation.
- New-Sync-Operation
  - Invoke an underlying Query or Signal as a synchronous Operation.
  - Invoke an Update as a synchronous Operation.
  - Execute arbitrary code as a synchronous Operation.
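As a sketch of the builder functions named above, here is a hedged Go example that exposes a hypothetical HelloWorkflow as an asynchronous Operation; the Operation name, payload types, and Workflow are all illustrative, and exact signatures may vary by SDK version:

```go
package app

import (
	"context"

	"github.com/nexus-rpc/sdk-go/nexus"
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/temporalnexus"
	"go.temporal.io/sdk/workflow"
)

// HelloInput and HelloOutput are hypothetical payload types.
type HelloInput struct{ Name string }
type HelloOutput struct{ Greeting string }

// HelloWorkflow is a hypothetical handler Workflow backing the Operation.
func HelloWorkflow(ctx workflow.Context, in HelloInput) (HelloOutput, error) {
	return HelloOutput{Greeting: "Hello, " + in.Name}, nil
}

// HelloOperation starts HelloWorkflow as an asynchronous Nexus Operation.
// The callback that maps input to StartWorkflowOptions must set a Workflow ID,
// typically derived from the request ID for idempotency.
var HelloOperation = temporalnexus.NewWorkflowRunOperation("say-hello", HelloWorkflow,
	func(ctx context.Context, in HelloInput, opts nexus.StartOperationOptions) (client.StartWorkflowOptions, error) {
		return client.StartWorkflowOptions{
			ID: "say-hello-" + opts.RequestID,
		}, nil
	})
```

Operations like this are registered in a Nexus Service and served by a Worker polling the Endpoint's target Task Queue.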
## Nexus Operation lifecycle {#operation-lifecycle}

When you execute a Nexus Operation from a caller Workflow using the Temporal SDK, a Command is sent to the Temporal Service to schedule the Nexus Operation, which is atomically handed off to the Nexus Machinery. The [Nexus Machinery](/glossary#nexus-machinery) uses state-machine-based invocation and completion callbacks to ensure [at-least-once](#at-least-once-execution-semantics-and-idempotency) execution of a Nexus Operation and reliable delivery of the result. The Nexus Machinery is responsible for making the Nexus RPC calls on behalf of the caller Workflow, with [automatic retries](#automatic-retries). This means you don't have to use Nexus RPC directly, only the Temporal SDK along with the Temporal Service.

### Synchronous Operation lifecycle

Nexus supports synchronous Operations that take less than 10 seconds to execute, as measured from the caller's Nexus Machinery. The lifecycle of a synchronous Nexus Operation, for example one that Signals or Queries a Workflow, is:

1. Caller Workflow executes a Nexus Operation using the Temporal SDK.
1. Caller Worker issues a [ScheduleNexusOperation](/references/commands#schedulenexusoperation) command to its Namespace gRPC endpoint.
1. Caller Namespace records a [NexusOperationScheduled](/references/events#nexusoperationscheduled) event in the caller's Workflow History.
1. Caller Nexus Machinery makes a Nexus call to start a Nexus Operation.
1. Handler Nexus Machinery receives the Nexus request and sync matches to a handler Worker.
1. Handler Worker gets a [Nexus Task](/tasks#nexus-task) to start a Nexus Operation, by polling the Nexus Endpoint's target Task Queue.
1. Handler processes the Nexus Task, using the Temporal SDK **New-Sync-Operation**.
1. Handler responds to its Namespace gRPC endpoint with the Operation result.
1. Caller Namespace records the result in the caller's Workflow History as a Nexus event, for example [Completed](/references/events#nexusoperationcompleted) or [Failed](/references/events#nexusoperationfailed).
1. Caller Worker polls for a Workflow Task on its Workflow Task Queue.
1. Caller Workflow gets the Operation result, using the Temporal SDK.

:::tip
Stay within the remaining request deadline budget to avoid being timed out. If a Nexus handler times out, the Operation will be retried by the caller's Nexus Machinery until the Operation's Schedule-To-Close Timeout has been exceeded.
:::

### Asynchronous Operation lifecycle {#asynchronous-operation-lifecycle}

An asynchronous Nexus Operation may take up to 60 days to complete in Temporal Cloud, which is the maximum Schedule-To-Close Timeout. The lifecycle of an asynchronous Nexus Operation, with differences from the synchronous lifecycle in bold:

1. Caller Workflow executes a Nexus Operation using the Temporal SDK.
1. Caller Worker issues a [ScheduleNexusOperation](/references/commands#schedulenexusoperation) command to its Namespace gRPC endpoint.
1. Caller Namespace records a [NexusOperationScheduled](/references/events#nexusoperationscheduled) event in the caller Workflow's History.
1. Caller Nexus Machinery makes a Nexus RPC to start a Nexus Operation.
1. Handler Nexus Machinery receives the Nexus request and sync matches to a handler Worker.
1. Handler Worker gets a [Nexus Task](/tasks#nexus-task) to start a Nexus Operation, by polling the Nexus Endpoint's target Task Queue.
1. Handler processes the Nexus Task, using the Temporal SDK **New-Workflow-Run-Operation**.
1. Handler responds to its Namespace gRPC endpoint with the **start Operation response**.
1. Caller Namespace records the response in the caller's Workflow history as a Nexus event, for example **[NexusOperationStarted](/references/events#nexusoperationstarted)**.
1. **Handler Workflow completes and the [Nexus Completion Callback](/glossary#nexus-async-completion-callback) is delivered to the caller's Nexus Machinery.**
1. Caller Namespace records the result in the caller's Workflow history as a Nexus event, for example [Completed](/references/events#nexusoperationcompleted) or [Failed](/references/events#nexusoperationfailed).
1. Caller Worker polls for a Workflow Task on its Workflow Task Queue.
1. Caller Workflow gets the Operation result, using the Temporal SDK.

### Executing arbitrary code from a synchronous Nexus Operation handler {#executing-arbitrary-code-from-a-sync-handler}

Synchronous Nexus Operation handlers can execute arbitrary code, but unlike Activities they should be short-lived. As mentioned above, a synchronous Nexus Operation handler has less than 10 seconds to process a Nexus start Operation request and should stay within the remaining request deadline budget for the Nexus request. For example, this may be done by looking at the Request-Timeout header or by hooking into the cancelation that is triggered when the timeout is exceeded.

### System interactions

Temporal Nexus Operations are requested and processed using the Temporal queue-based Worker architecture. Workers interact with their Namespace gRPC endpoint as before, and the Nexus Machinery on both sides handles the cross-namespace communication. For example, when you execute a Nexus Operation in a caller Workflow, the following Namespace gRPC calls are made (a caller-side sketch follows this list):

1. **RespondWorkflowTaskCompleted** ([ScheduleNexusOperation command](/references/commands#schedulenexusoperation)) is used by the caller Worker to schedule a Nexus Operation, which atomically hands off execution to the caller's Nexus Machinery.
1. **PollNexusTaskQueue** is used by the handler Worker to receive a [Nexus Task](/tasks#nexus-task) to process, for example to start a Nexus Operation.
1. **RespondNexusTaskCompleted** or **RespondNexusTaskFailed** is used by the handler Worker to return the Nexus Task result. When asked to start a Nexus Operation, the Nexus handler decides if the Operation will be synchronous or asynchronous. This is typically a static decision based on the [Temporal SDK builder function used](#sdk-support):
   - [Asynchronous Nexus Operations](#asynchronous-operation-lifecycle), created with the New-Workflow-Run-Operation SDK helper, return a Nexus Operation token that can be used to perform additional actions like canceling an Operation.
   - [Synchronous Nexus Operations](#synchronous-operation-lifecycle), created with the New-Sync-Operation SDK helper, return the Operation result directly.
1. The caller's Nexus Machinery receives the result and records a NexusOperation event in the caller's Workflow History.
1. **PollWorkflowTaskQueue** is used by the caller Worker to receive a Workflow Task with the Nexus Operation event, which may be [Started](/references/events#nexusoperationstarted), [Completed](/references/events#nexusoperationcompleted), [Failed](/references/events#nexusoperationfailed), [Canceled](/references/events#nexusoperationcanceled), or [TimedOut](/references/events#nexusoperationtimedout).
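To make the caller side concrete, here is a minimal Go sketch of executing a Nexus Operation from a caller Workflow. The Endpoint, Service, and Operation names and the input/output types are hypothetical placeholders; `workflow.NewNexusClient` is the Go SDK entry point for this, though treat the exact shapes as illustrative:

```go
package caller

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

type HelloInput struct{ Name string }
type HelloOutput struct{ Greeting string }

func CallerWorkflow(ctx workflow.Context, name string) (string, error) {
	// The Nexus client is bound to an Endpoint and Service from the Nexus Registry.
	c := workflow.NewNexusClient("my-nexus-endpoint", "hello-service")

	// This issues the ScheduleNexusOperation command; the Nexus Machinery
	// performs the Nexus RPCs and records the lifecycle events in this
	// Workflow's Event History.
	fut := c.ExecuteOperation(ctx, "hello", HelloInput{Name: name},
		workflow.NexusOperationOptions{
			// Overall budget for the Operation, including automatic retries.
			ScheduleToCloseTimeout: 24 * time.Hour,
		})

	// Blocks until the Completed (or Failed) Nexus event is recorded.
	var out HelloOutput
	if err := fut.Get(ctx, &out); err != nil {
		return "", err
	}
	return out.Greeting, nil
}
```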
## Automatic retries {#automatic-retries}

Once the caller Workflow schedules an Operation with the caller's Temporal Service, the caller's Nexus Machinery keeps trying to start the Operation. If a [retryable Nexus error](/references/failures#nexus-errors) is returned, the Nexus Machinery will retry until the Nexus Operation's Schedule-to-Close-Timeout is exceeded. For example, if a Nexus handler returns a [retryable error](/references/failures#nexus-errors), or an [upstream timeout](https://github.com/nexus-rpc/api/blob/main/SPEC.md#predefined-handler-errors) is encountered by the caller, the Nexus request will be retried up to the [default Retry Policy's](https://github.com/temporalio/temporal/blob/de7c8879e103be666a7b067cc1b247f0ac63c25c/components/nexusoperations/config.go#L111) max attempts and expiration interval.

:::note

This differs from how Activities and Workflows handle errors and retries:

- [Errors in Activities](/references/failures#errors-in-activities)
- [Non-retryable errors](/references/failures#non-retryable)

:::

See [errors in Nexus handlers](/nexus/error-handling#errors-in-nexus-handlers) to control the retry behavior by returning a [non-retryable Nexus error](/references/failures#non-retryable-nexus-errors).

## Circuit breaking {#circuit-breaking}

Circuit breaking handles deployment issues, such as remote service faults, that take time to recover from. The circuit-breaker pattern improves application stability and resilience by detecting repeated errors and enforcing a "timeout" to allow external resources to recover.

Nexus implements the circuit-breaker pattern for each pair of caller Namespace and target Nexus Endpoint. This pair is called a "destination pair" and consists of one Namespace and one Endpoint. The circuit breaker for each pair is unique to those two elements; it trips and resets independently from all other Nexus destination pairs.

Here's an example of how the circuit breaker works. Say that all Nexus Workers associated with a Nexus Endpoint are unavailable for some reason, and five consecutive requests fail due to request timeouts. By default, the circuit breaker trips after five consecutive Nexus requests fail due to [retryable errors](/references/failures#nexus-errors).

After a circuit breaker trips, it enters the _open_ state. In this state, the caller's Nexus Machinery fails early and stops sending requests to the target Nexus Endpoint. After 60 seconds in the _open_ state, the circuit breaker transitions to the _half-open_ state, allowing a single trailblazing request from the Client. If the request is successful, the circuit breaker returns to the _closed_ state, its default operational state, and all requests are allowed through. If the request fails, the circuit breaker returns to the _open_ state for another 60 seconds.

The circuit breaker state is surfaced in a caller Workflow's [Pending Nexus Operations](/nexus/execution-debugging#pending-operations) and in the handler Workflow's [Pending Nexus Callbacks](/nexus/execution-debugging#pending-callbacks). You can check the circuit breaker state using the UI, the Temporal CLI, or the `DescribeWorkflowExecution` API.

When the circuit breaker for a destination pair is tripped (i.e., the circuit breaker is _open_), the [Pending Nexus Operation](/nexus/execution-debugging#pending-operations) for a [Nexus Operation Scheduled](/references/events#nexusoperationscheduled) event surfaces a State of Blocked along with a BlockedReason.
Here's how that looks in the Web UI: the Pending Nexus Operations section shows the Operation with a State of Blocked, and the open circuit breaker has made 1 attempt. For a given destination pair, different Nexus Operations may all contribute to the failure count that trips the circuit breaker, so while the circuit breaker is open, a given Nexus Operation may have no attempts or fewer than 5 attempts.

To check from the command line, issue:

```sh
temporal workflow describe -w my-workflow-id
```

Here's how that looks:

```sh
Execution Info:
  WorkflowId  my-workflow-id
  ...

Pending Activities: 0
Pending Child Workflows: 0
Pending Nexus Operations: 1
  Endpoint                 my-nexus-endpoint
  Service                  nexus-playground
  Operation                sync-op-ok
  OperationToken
  State                    Blocked
  Attempt                  1
  ScheduleToCloseTimeout   1d 0h 0m 0s
  LastAttemptCompleteTime  56 seconds ago
  LastAttemptFailure       {"message":"handler error (UPSTREAM_TIMEOUT): upstream timeout","cause":{"message":"upstream timeout","applicationFailureInfo":{"type":"NexusFailure"}},"applicationFailureInfo":{"type":"NexusHandlerError"}}
  BlockedReason            The circuit breaker is open.
```

Here's what a [Nexus Operation Cancel Request](/references/events#nexusoperationcancelrequested) surfaces for a CancelationState of Blocked and a CancelationBlockedReason:

```sh
Execution Info:
  WorkflowId  my-workflow-id
  ...

Pending Activities: 0
Pending Child Workflows: 0
Pending Nexus Operations: 1
  Endpoint                            my-nexus-endpoint
  Service                             nexus-playground
  Operation                           async-op-workflow-wait-for-cancel
  OperationToken                      eyJ2IjowLCJ0IjoxLCJucyI6Im5zIiwid2lkIjoidyJ
  State                               Started
  Attempt                             1
  ScheduleToCloseTimeout              1d 0h 0m 0s
  LastAttemptCompleteTime             51 seconds ago
  CancelationState                    Blocked
  CancelationAttempt                  5
  CancelationRequestedTime            37 seconds ago
  CancelationLastAttemptCompleteTime  27 seconds ago
  CancelationLastAttemptFailure       {"message":"handler error (UPSTREAM_TIMEOUT): upstream timeout","cause":{"message":"upstream timeout","applicationFailureInfo":{"type":"NexusFailure"}},"applicationFailureInfo":{"type":"NexusHandlerError"}}
  CancelationBlockedReason            The circuit breaker is open.
```

## Execution semantics {#execution-semantics}

### At-least-once execution semantics and idempotency

The Nexus Machinery provides reliable execution with at-least-once execution semantics for a Nexus Operation, until the caller's Schedule-to-Close-Timeout is exceeded, at which time the overall Nexus Operation times out.

The Nexus Machinery breaks up the [Nexus Operation lifecycle](/nexus/operations#operation-lifecycle) into one or more [Nexus Tasks](/tasks#nexus-task) that a Nexus handler is responsible for processing. If a Nexus handler times out or returns a retryable Nexus error, the Nexus Machinery will retry the Nexus request to provide at-least-once execution. This means it's possible for your Nexus handler to be invoked multiple times for a given Nexus Operation.

To deal with at-least-once execution, a Nexus Operation handler should be idempotent, just as Activities should be idempotent. This isn't required in all cases, but it is highly recommended in general.

### Exactly-once execution semantics through an underlying WorkflowIDReusePolicy

To deduplicate work and get exactly-once execution semantics, a Nexus Operation can start a Workflow with a WorkflowIDReusePolicy of RejectDuplicates, which only allows one Workflow Execution per Workflow ID within a Namespace for the Retention Period.

## Cancelation

The request to cancel a caller Workflow is automatically propagated to all pending Nexus Operations, and in turn to any underlying handler Workflows.
If an underlying handler Workflow is canceled, the Nexus Operation will report a [Canceled Failure](/references/failures#cancelled-failure) to the caller's Workflow Execution.

## Termination

If the caller Workflow is Terminated, all pending Nexus Operations are abandoned. If possible, consider [Cancelation](#cancelation) instead.

## Versioning {#versioning}

Task Routing is the simplest way to version your service code. If you have a new backward-incompatible Nexus Operation Handler, for example due to a wire-level incompatible breaking change, start by using a different Service and Task Queue. The version may be part of the Service name, for example `prod.payments.v2`. Callers can then migrate to the new version on their normal deployment schedule.

## Attaching multiple Nexus callers to a handler Workflow {#attaching-multiple-nexus-callers}

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Using a [Conflict-Policy of Use-Existing](/workflow-execution/workflowid-runid#workflow-id-conflict-policy) with the [New-Workflow-Run-Operation](/nexus/operations#sdk-support) SDK helper is currently a [Public Preview](/evaluate/development-production-features/release-stages#public-preview) feature.

:::

:::tip CLOUD LIMITS

In [Temporal Cloud](https://docs.temporal.io/cloud/limits#per-workflow-callback-limits), the number of attached callers is limited to 32; in self-hosted Temporal, it can be configured with the [Callback limit](/workflow-execution/limits#workflow-execution-callback-limits).

:::

Nexus Operations that start a Workflow with the [New-Workflow-Run-Operation](/nexus/operations#sdk-support) SDK helper automatically attach a completion Callback to the handler Workflow, so the Nexus caller receives the result. Additional Nexus callers may attach to the same handler Workflow if the Nexus handler uses a [Conflict-Policy of Use-Existing](/workflow-execution/workflowid-runid#workflow-id-conflict-policy).

A single handler Workflow Execution has a [Callback limit](/workflow-execution/limits#workflow-execution-callback-limits) that governs how many Nexus callers can be attached. Nexus callers that exceed the limit will receive an error.

When a handler Workflow uses [Continue-As-New](/workflow-execution/continue-as-new), the existing Nexus completion Callbacks are copied to the new Workflow Execution, and the previous Workflow Execution's completion Callbacks are left in the Standby state indefinitely.

---

## Nexus Registry

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).

:::

The [Nexus Registry](/glossary#nexus-registry) is used to view and manage [Nexus Endpoints](/nexus/endpoints). Adding a Nexus Endpoint to the Nexus Registry deploys the Endpoint, so it is available at runtime to serve Nexus requests.

:::info

The Nexus Registry is scoped to an Account in Temporal Cloud and scoped to a Cluster for self-hosted deployments.

:::

Developers can advertise available Endpoints and Services, so others can find them for use in their caller Workflows. Endpoint names must be unique within the Nexus Registry.

## View and Manage Nexus Endpoints

Nexus Endpoints may be managed using the Temporal UI, CLI, Cloud Terraform provider, or [Cloud Ops API](/ops).

:::tip RESOURCES

- [Terraform support](/cloud/terraform-provider#manage-temporal-cloud-nexus-endpoints-with-terraform) for Temporal Cloud.
- [tcld nexus](/cloud/tcld/nexus) for Temporal Cloud.
- [temporal operator nexus](/cli/operator#nexus) for self-hosted deployments.

:::

### Search for a Nexus Endpoint

You can search the Nexus Registry by Endpoint name or by an Endpoint's target Namespace to quickly find an Endpoint of interest. The Endpoint details page shows the target Namespace and target Task Queue, along with the Endpoint description rendered as Markdown.

### Create a Nexus Endpoint

Creating a Nexus Endpoint includes setting an Access Policy: a built-in allowlist of the caller Namespaces that can use the Endpoint. The allowlist must be set before any caller can access the Endpoint; even the target Namespace must be added to the Access Policy.

### Edit a Nexus Endpoint

Editing a Nexus Endpoint allows changing everything but the Endpoint Name. The target Namespace and target Task Queue can be updated without interrupting in-flight Nexus Operations that are already being processed; new Nexus Operations are routed to the updated target Namespace and target Task Queue.

### Configure runtime access controls

Configure the Endpoint's Access Policy to control which callers can access a Nexus Endpoint at runtime. The Access Policy specifies the caller Namespaces that can use Nexus Services on the Endpoint at runtime. No callers are allowed by default, even if the caller is in the same Namespace as the Endpoint target. See [Runtime Access Controls](/nexus/security#runtime-access-controls) for more information.

## Roles and permissions

:::info

Temporal Cloud has built-in Nexus security. For self-hosted deployments you can implement [custom Authorizers](/self-hosted-guide/security#authorizer-plugin).

:::

In Temporal Cloud, access to the Nexus Registry is controlled with the following roles and permissions:

- Viewing, listing, and searching Nexus Endpoints requires:
  - Read-only role (or higher) in an Account
- Managing a Nexus Endpoint (create, edit, delete) requires **both**:
  - Developer role (or higher) in an Account
  - Namespace Admin permission on the Endpoint's target Namespace

See [Nexus Security in Temporal Cloud](/cloud/nexus/security) for more information.

## Terraform and Ops API support

Nexus Endpoint provisioning and lifecycle management may be automated with Terraform or the Ops API.

:::tip RESOURCES

- [Terraform support](/cloud/terraform-provider#manage-temporal-cloud-nexus-endpoints-with-terraform) for Temporal Cloud.
- [Cloud Ops API](/ops) for Temporal Cloud.
- [Operator API](https://github.com/temporalio/api/blob/master/temporal/api/operatorservice/v1/service.proto) for self-hosted deployments.

:::

---

## Security in Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).

:::

Nexus supports restricting access to Nexus Endpoints. Temporal Cloud has built-in Endpoint access controls and provides secure Nexus connectivity across Namespaces. For self-hosted deployments you can implement custom Authorizers.

## Runtime access controls {#runtime-access-controls}

In Temporal Cloud, access to a Nexus Endpoint at runtime is controlled by the Endpoint's access control policy (an allowlist of caller Namespaces), configured per Endpoint in the Nexus Registry.
Workers in each Namespace authenticate with Temporal Cloud as they do today, with mTLS certificates or an API key, as allowed by the caller Namespace. Once a Worker has authenticated, it can send Nexus Operation commands to Temporal Cloud using a Temporal SDK to start a Nexus Operation in a different Namespace. For example, in the Temporal Go SDK a caller Workflow uses `nexusClient.ExecuteOperation` to issue a command to start a Nexus Operation.

To process a `ScheduleNexusOperation` command from a caller Workflow, Temporal Cloud obtains the handler Namespace and Task Queue for the handler Endpoint, and restricts access by verifying that the caller's Namespace is in the Endpoint's allowlist. In this way, Temporal Cloud acts as a trusted broker across Namespace boundaries and relies on authenticated Workers in each Namespace.

See [Configure Runtime Access Controls](/nexus/registry#configure-runtime-access-controls) for additional information.

## Secure connectivity {#secure-connectivity}

:::info

Temporal Cloud has built-in secure connectivity across all Namespaces in an Account. Self-hosted deployments rely on the Temporal Cluster being secure.

:::

In Temporal Cloud, multiple security provisions are in place to ensure it can act as a trusted broker across Namespace boundaries:

- Workers authenticate to their Namespaces via mTLS or an API key, as allowed by their Namespace configuration.
- mTLS is used for all Nexus communication, including across cloud cells and regions, to:
  - Start or Cancel a Nexus Operation.
  - Call back on completion of an asynchronous Nexus Operation.
- Nexus Endpoints are only privately accessible from within a Temporal Cloud account:
  - Accessible from within a caller Workflow using the Temporal SDK.
  - Not yet externally accessible to arbitrary clients.

## Payload encryption and Data Converter {#payload-encryption-data-converter}

The Data Converter works the same for a Nexus Operation as it does for other payloads sent between a Worker and Temporal Cloud. The caller and handler Workers must have compatible Data Converters, as Operation inputs and results are passed between the two. If encryption keys are used to encrypt payloads, they must be available to both the caller and the handler. For example, the caller and handler can use a shared symmetric key stored in your KMS. Please let us know if you need per-Service payload encryption or better handling for asymmetric encryption keys.

## Nexus Registry security {#managing-nexus-endpoints}

See [Nexus Registry Roles and Permissions](/nexus/registry#roles-and-permissions).

## Learn more

- [Evaluate](/evaluate/nexus) why you should use Nexus and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- Learn how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4&t=934s).
- Explore [additional resources](/evaluate/nexus#learn-more) to learn more about Nexus.

---

## Nexus Services

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).

:::

[Nexus Services](/glossary#nexus-service) are named collections of arbitrary-duration [Nexus Operations](/nexus/operations) that provide a contract suitable for sharing across team boundaries. A [Nexus Endpoint](/nexus/endpoints) exposes Nexus Services for others to use.
Services are handled in a Temporal Worker that polls the Endpoint's target Namespace and Task Queue. Multiple Nexus Services may run in the same Worker, polling the same Endpoint target Namespace and Task Queue. For example, a Nexus Service is often registered in the same Worker as the underlying Workflows it abstracts:

```go
func main() {
	// Connect to the handler Namespace.
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create client", err)
	}
	defer c.Close()

	// The Worker polls the Nexus Endpoint's target Task Queue.
	w := worker.New(c, taskQueue, worker.Options{})

	// Create the Nexus Service and register its Operations.
	service := nexus.NewService(service.HelloServiceName)
	err = service.Register(handler.EchoOperation, handler.HelloOperation)
	if err != nil {
		log.Fatalln("Unable to register operations", err)
	}
	w.RegisterNexusService(service)

	// Register the Workflow that backs the asynchronous Operation.
	w.RegisterWorkflow(handler.HelloHandlerWorkflow)

	err = w.Run(worker.InterruptCh())
	if err != nil {
		log.Fatalln("Unable to start worker", err)
	}
}
```

The Nexus Service name is used when invoking a Nexus Operation from a caller Workflow.

:::tip RESOURCES

- [Go SDK - build and use Nexus Services](/develop/go/nexus#develop-nexus-service-operation-handlers).
- [Java SDK - build and use Nexus Services](/develop/java/nexus#develop-nexus-service-operation-handlers).

:::

---

## Common Use Cases for Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).

:::

[Temporal Nexus](/evaluate/nexus) enables the following use cases:

- **Cross-team, cross-domain, and cross-namespace** - connect Temporal Applications within and across Namespaces.
- **Share a subset of a Temporal Application** - abstract and share a subset of a Temporal Application as a Nexus Service.
- **Modular design for growth** - modular application design that can evolve as you grow.
- **Smaller failure domains** - give each team its own Namespace for improved security, troubleshooting, and fault isolation.
- **Multi-region** - Nexus requests in Temporal Cloud are routed across a global mTLS-secured Envoy mesh.

:::tip RELATED

- [Evaluate](/evaluate/nexus) why you should use Nexus and learn more about [Nexus use cases](/evaluate/nexus#use-cases).

:::

## Share Workflows Across Namespaces

Nexus is purpose-built to connect Temporal Applications within and across Namespaces. It addresses the limitations of Child Workflows, Activity wrappers, and bespoke APIs that target a remote Namespace. Nexus offers a streamlined Temporal developer experience, reliable execution, and integrated observability.

Without Nexus, when a caller Workflow invokes another Workflow directly, the caller must know:

- The target Workflow's Namespace and Task Queue.
- The target Workflow's Retry Policy and Timeouts.
- The target Workflow's options, including [Workflow-Id-Reuse-Policy](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) and [Workflow-Id-Conflict-Policy](/workflow-execution/workflowid-runid#workflow-id-conflict-policy).
- The target Workflow's ID uniqueness constraints, so it doesn't conflict with other Workflow types in the handler Namespace.

This creates a high degree of coupling between the caller and handler by exposing internal implementation details to the caller. It adds friction for the caller, who shouldn't need to know this level of detail, and it makes it more difficult to refactor or migrate handler Workflows to a different Namespace or Task Queue.
In short, Workflow to Workflow is a leaky abstraction. Nexus addresses this by providing a cleaner service contract between the caller and handler.

Nexus is suitable for abstracting and sharing Workflows across team, domain, and Namespace boundaries. Nexus requests in Temporal Cloud are routed across a global mTLS-secured Envoy mesh, so they're also suitable for multi-region use cases.

Enable calls across Namespaces by:

1. Creating a [Nexus Endpoint](/nexus/endpoints) in the [Nexus Registry](/nexus/registry) that:
   1. Targets the handler Namespace.
   2. Allows the caller Namespace.
2. Creating a [Nexus Service](/nexus/services) in a Worker within a handler Namespace.
   1. Abstract Workflows with Nexus Operations [using Temporal SDK builder functions for Nexus Operations](/nexus/operations#sdk-support).
   2. Register the Nexus Service with the Worker.
   3. Ensure the Worker is polling the Endpoint's Task Queue.
3. Calling the Nexus Service from a Workflow in a different Namespace.
   1. Execute a Nexus Operation from a caller Workflow [using the Temporal SDK](/nexus/operations#sdk-support).

---

## Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).

:::

[Temporal Nexus](/evaluate/nexus) allows you to reliably connect Temporal Applications. It promotes a more modular architecture for sharing a subset of your team's capabilities with well-defined microservice contracts for other teams to use. Nexus was designed with Durable Execution in mind and enables each team to have their own Namespace for improved modularity, security, debugging, and fault isolation.

## Connect Temporal Applications

[Nexus Services](/nexus/services) are exposed from a [Nexus Endpoint](/nexus/endpoints) created in the [Nexus Registry](/nexus/registry). Adding a Nexus Endpoint to the Nexus Registry deploys the Endpoint, so it is available at runtime to serve Nexus requests. The Nexus Registry is scoped to an Account in Temporal Cloud and scoped to a Cluster for self-hosted deployments.

A Nexus Endpoint is a reverse proxy that decouples the caller from the handler and routes requests to upstream targets. It currently supports routing to a single target Namespace and Task Queue. Nexus Services and [Nexus Operations](/nexus/operations) are often registered in the same Worker as the underlying Temporal primitives they abstract.

Nexus connectivity across Namespaces is provided by the Temporal Service. Temporal Cloud supports Nexus connectivity [within and across regions](/cloud/nexus#multi-region-connectivity) and has [built-in access controls](/cloud/nexus#built-in-access-controls). Self-hosted Nexus connectivity is supported within a single Cluster, and [custom Authorizers](/self-hosted-guide/security#authorizer-plugin) may be used for access control.

## Build and use Nexus Services

Nexus has a familiar programming model to build and use Nexus Services using the Temporal SDK. The [Nexus Operation lifecycle](/nexus/operations#operation-lifecycle) supports both [synchronous](/nexus/operations#synchronous-operation-lifecycle) and [asynchronous](/nexus/operations#asynchronous-operation-lifecycle) Operations. It is suitable for low-latency and long-running use cases.
Nexus Operations can be implemented with Temporal primitives, like Workflows, or can [execute arbitrary code](/nexus/operations#executing-arbitrary-code-from-a-sync-handler).

:::tip RESOURCES

- [Go SDK - Nexus quick start and code sample](/develop/go/nexus)
- [Java SDK - Nexus quick start and code sample](/develop/java/nexus)
- [Python SDK - Nexus quick start and code sample](/develop/python/nexus)
- [TypeScript SDK - Nexus quick start and code sample](/develop/typescript/nexus)
- [.NET SDK - Nexus quick start and code sample](/develop/dotnet/nexus)

:::

## Queue-based Worker architecture

Nexus uses the Temporal queue-based Worker architecture and built-in Nexus Machinery to ensure reliable execution of Nexus Operations. If a Nexus Service is down, a caller Workflow can continue to schedule Nexus Operations, and they will be processed when the service is up again.

Nexus handler Workers poll the Endpoint's target Namespace and Task Queue for [Nexus Tasks](/tasks#nexus-task) to process. Workers authenticate to their Namespace's gRPC endpoint using supported methods, including mTLS client certificates or API keys in Temporal Cloud. See [system interactions](/nexus/operations#system-interactions) for additional detail.

## Built-in Nexus Machinery

The built-in [Nexus Machinery](/glossary#nexus-machinery) uses state-machine-based invocation and completion callbacks. It guarantees [at-least-once](/nexus/operations#execution-semantics) execution, with automatic retries, circuit breaking, rate limiting, and load balancing. The Nexus Machinery uses [Nexus RPC](/glossary#nexus-rpc) on the wire, a protocol designed with Durable Execution in mind, to support arbitrary-duration Operations that extend beyond a traditional RPC.

For example, when you execute a Nexus Operation in a caller Workflow, a command is sent to Temporal to schedule the Operation, and the Nexus Machinery is responsible for making the Nexus RPC calls on your behalf. This means you don't have to use Nexus RPC directly, only the Temporal SDK along with the Temporal Service.

## Multi-level calls

Nexus supports multi-level Nexus calls, for example:

- Workflow A → Nexus Operation 1 → Workflow B → Nexus Operation 2 → Workflow C

## Public Preview features

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

The following Nexus features are currently in [Public Preview](/evaluate/development-production-features/release-stages#public-preview).

:::

- [Attaching multiple Nexus callers to a handler Workflow](/nexus/operations#attaching-multiple-nexus-callers)

---

## What is a Temporal Retry Policy?

A Retry Policy is a collection of settings that tells Temporal how and when to try again after something fails in a Workflow Execution or Activity Task Execution.

## Overview

Temporal's default behavior is to automatically retry an Activity that fails, so transient or intermittent failures require no action on your part. This behavior is defined by the Retry Policy. A Retry Policy is declarative: you do not need to implement your own logic for handling retries; you only need to specify the desired behavior, and Temporal will provide it.

In contrast to the Activities it contains, a Workflow Execution itself is not associated with a Retry Policy by default. This may seem counterintuitive, but Workflows and Activities perform different roles. Activities are intended for operations that may fail, so having a default Retry Policy increases the likelihood that they will ultimately complete successfully, even if the initial attempt failed.
On the other hand, Workflows must be deterministic and are not intended to perform failure-prone operations. While it is possible to assign a Retry Policy to a Workflow Execution, this is not the default and it is uncommon to do so. Retry Policies do not apply to Workflow Task Executions, which retry until the Workflow Execution Timeout (unlimited by default) with an exponential backoff and a maximum interval of 10 minutes.

A Retry Policy instructs the Temporal Service how to retry a failure of either a [Workflow Execution](/workflow-execution) or an [Activity Task Execution](/tasks#activity-task-execution). Try out the [Activity retry simulator](/develop/activity-retry-simulator) to visualize how a Retry Policy works.

---

## Default behavior

Activities in Temporal are associated with a Retry Policy by default, while Workflows are not. The Temporal SDK provides a Retry Policy instance with default behavior. While this object is not specific to either a Workflow or Activity, you'll use different methods to apply it to the execution of each. This section details the default retry behavior for both Activities and Workflows to provide context for any further customization.

### Activity Execution

Temporal's default behavior is to automatically retry an Activity, with a short delay between each attempt that increases exponentially, until it either succeeds or is canceled. When a subsequent attempt succeeds, your Workflow code resumes as if the failure never occurred. When an Activity Task Execution is retried, the Temporal Service places a new [Activity Task](/tasks#activity-task) into its respective [Activity Task Queue](/task-queue), which results in a new Activity Task Execution.

The default Retry Policy uses exponential backoff with a 2.0 backoff coefficient, starting with a 1-second initial interval and capping at a maximum interval of 100 seconds. By default, the maximum number of attempts is set to zero, which is evaluated as unlimited, and no non-retryable errors are specified. For detailed information about all Retry Policy attributes and their default values, see the [Properties](#properties) section.

### Workflow Execution

Unlike Activities, Workflow Executions do not retry by default. When a Workflow Execution is spawned, it is not associated with a default Retry Policy and thus does not retry by default. Temporal provides guidance around idempotence of Activity code with the expectation that Activities will need to re-execute upon failure; this is not typically true of Workflows. In most use cases, a Workflow failure indicates an issue with the design or deployment of your application, for example, a permanent failure that may require different input data.

Retrying an entire Workflow Execution is not recommended due to Temporal's deterministic design. Since Workflows replay the same sequence of events to reach the same state, retrying the whole Workflow would repeat the same logic without resolving the underlying issue that caused the failure. This repetition does not address problems related to external dependencies or unchanged conditions, and it can lead to unnecessary resource consumption and higher costs. Instead, it's more efficient to retry only the failed Activities. This approach targets specific points of failure, allowing the Workflow to progress without redundant operations, saving resources and ensuring a more focused and effective error recovery process.
If you need to retry parts of your Workflow Definition, we recommend you implement this in your Workflow code.

## Custom Retry Policy

To use a custom Retry Policy, provide it as an options parameter when starting a Workflow Execution or Activity Execution. Only certain scenarios merit starting a Workflow Execution with a custom Retry Policy, such as the following:

- A [Temporal Cron Job](/cron-job) or some other stateless, always-running Workflow Execution that can benefit from retries.
- A file-processing or media-encoding Workflow Execution that downloads files to a host.

## Properties

### Default values for Retry Policy

```
Initial Interval     = 1 second
Backoff Coefficient  = 2.0
Maximum Interval     = 100 × Initial Interval
Maximum Attempts     = ∞
Non-Retryable Errors = []
```

### Initial Interval

- **Description:** Amount of time that must elapse before the first retry occurs.
- **The default value is 1 second.**
- **Use case:** This is used as the base interval time for the [Backoff Coefficient](#backoff-coefficient) to multiply against.

### Backoff Coefficient

- **Description:** The value dictates how much the _retry interval_ increases.
- **The default value is 2.0.**
- A backoff coefficient of 1.0 means that the retry interval always equals the [Initial Interval](#initial-interval).
- **Use case:** Use this attribute to increase the interval between retries. With a backoff coefficient greater than 1.0, the first few retries happen relatively quickly to overcome intermittent failures, but subsequent retries happen farther and farther apart to account for longer outages. Use the [Maximum Interval](#maximum-interval) attribute to prevent the coefficient from increasing the retry interval too much.

### Maximum Interval

- **Description:** Specifies the maximum interval between retries.
- **The default value is 100 times the [Initial Interval](#initial-interval).**
- **Use case:** This attribute is useful for [Backoff Coefficients](#backoff-coefficient) greater than 1.0, because it prevents the retry interval from growing indefinitely.

### Maximum Attempts

- **Description:** Specifies the maximum number of execution attempts that can be made in the presence of failures.
- **The default is unlimited.**
- If this limit is exceeded, the execution fails without retrying again; when this happens, an error is returned.
- Setting the value to 0 also means unlimited.
- Setting the value to 1 means a single execution attempt and no retries.
- Setting the value to a negative integer results in an error when the execution is invoked.
- **Use case:** Use this attribute to ensure that retries do not continue indefinitely. In most cases, we recommend using the Workflow Execution Timeout for [Workflows](/workflows) or the Schedule-To-Close Timeout for Activities to limit the total duration of retries, rather than using this attribute.

### Non-Retryable Errors {#non-retryable-errors}

Non-Retryable Errors specify errors that shouldn't be retried; by default, none are specified. Errors are matched against the `type` field of the [Application Failure](/references/failures#application-failure). If you know of error types that should not trigger a retry, specify them here, and the execution will not be retried when they occur.

#### Non-Retryable Errors for Activities

When writing software applications, you will encounter three types of failures: transient, intermittent, and permanent.
While transient and intermittent failures may resolve themselves upon retrying without further intervention, permanent failures will not. Permanent failures, by definition, require you to make some change to your logic or your input. Therefore, it is better to surface them than to retry them. Non-Retryable Errors are errors that will not be retried, regardless of a Retry Policy.

In Ruby, to raise a non-retryable error, specify the `non_retryable` flag when raising an `ApplicationError`:

```ruby
raise Temporalio::Error::ApplicationError.new(
  "Invalid credit card number: #{credit_card_number}",
  type: 'InvalidChargeAmount',
  non_retryable: true
)
```

This will designate the `ApplicationError` as non-retryable.

In Python, to raise a non-retryable error, specify the `non_retryable` flag when raising an `ApplicationError`:

```python
raise ApplicationError(
    f"Invalid credit card number: {credit_card_number}",
    type="InvalidChargeAmount",
    non_retryable=True,
)
```

This will designate the `ApplicationError` as non-retryable.

In TypeScript, to throw a non-retryable error, add `nonRetryable: true` to `ApplicationFailure.create({})`:

```typescript
throw ApplicationFailure.create({
  message: `Invalid charge amount: ${chargeAmount} (must be above zero)`,
  details: [chargeAmount],
  nonRetryable: true
});
```

This will designate the Error as non-retryable.

In Java, to throw a non-retryable error, use the `newNonRetryableFailure` method:

```java
throw ApplicationFailure.newNonRetryableFailure(
    "Invalid credit card number: " + creditCardNumber,
    InvalidChargeAmountException.class.getName()
);
```

This will designate the `ApplicationFailure` as non-retryable.

In Go, to return a non-retryable error, replace your call to `NewApplicationError()` with `NewNonRetryableApplicationError()`:

```go
temporal.NewNonRetryableApplicationError("Credit Card Charge Error", "CreditCardError", nil, nil)
```

This will designate the Error as non-retryable.

In C#, to throw a non-retryable error, specify the `nonRetryable` flag when throwing an `ApplicationFailureException`:

```csharp
var attempt = ActivityExecutionContext.Current.Info.Attempt;
throw new ApplicationFailureException(
    $"Something bad happened on attempt {attempt}",
    errorType: "my_failure_type",
    nonRetryable: true
);
```

This will designate the `ApplicationFailureException` as non-retryable.

Use non-retryable errors in your code sparingly. If you do not mark a failure as non-retryable within its definition, you can always mark that error _type_ as non-retryable in your Activity's Retry Policy. The reverse is not true; an error designated non-retryable at the definition site always wins:

- In Ruby, an `ApplicationError` with the `non_retryable` keyword argument set to `true` will always be non-retryable.
- In Python, an `ApplicationError` with the `non_retryable` keyword argument set to `True` will always be non-retryable.
- In TypeScript, an error with `nonRetryable: true` set will always be non-retryable.
- In Java, a `newNonRetryableFailure()` will always be non-retryable, while a regular `newFailure()` can still be marked non-retryable by type in the Retry Policy.
- In Go, a `NewNonRetryableApplicationError()` will always be non-retryable, while a regular `NewApplicationError()` can still be marked non-retryable by type in the Retry Policy.
- In C#, an `ApplicationFailureException` with the `nonRetryable` parameter set to `true` will always be non-retryable.

For example, checking for bad input data is a reasonable time to use a non-retryable error. If the Activity cannot proceed with the input it has, that error should be surfaced immediately so that the input can be corrected on the next attempt.

If responsibility for your application is distributed across multiple maintainers, or if you are developing a library to integrate into somebody else's application, you can think of the decision to hardcode non-retryable errors as following a "caller vs. implementer" dichotomy. Anyone who is calling your Activity can make decisions about their Retry Policy, but only the implementer can decide whether an error should never be retryable out of the box.

## Retry interval

The wait time before a retry is the _retry interval_. A retry interval is the smaller of two values:

- The [Initial Interval](#initial-interval) multiplied by the [Backoff Coefficient](#backoff-coefficient) raised to the power of the number of retries.
- The [Maximum Interval](#maximum-interval).

For example, with the default values, successive retry intervals grow as 1s, 2s, 4s, 8s, and so on, until they are capped at the 100-second Maximum Interval.

### Per-error next Retry delay

Sometimes, your Activity or Workflow raises a special exception that needs a different retry interval from the Retry Policy. To accomplish this, you may throw an [Application Failure](/references/failures#application-failure) with the next Retry delay field set. This value replaces and overrides whatever the retry interval would be on the Retry Policy. Note that your retries will still cap out under the Retry Policy's Maximum Attempts, as well as overall timeouts: for an Activity, its Schedule-to-Close Timeout applies; for a Workflow, the Execution Timeout applies.

## Event History

There are some subtle nuances to how Events are recorded to an Event History when a Retry Policy comes into play.

- For an Activity Execution, the [ActivityTaskStarted](/references/events#activitytaskstarted) Event will not show up in the Workflow Execution Event History until the Activity Execution has completed or failed (having exhausted all retries). This is to avoid filling the Event History with noise. Use the Describe API to get a pending Activity Execution's attempt count.
- For a Workflow Execution with a Retry Policy, if the Workflow Execution fails, the Workflow Execution will [Continue-As-New](/workflow-execution/continue-as-new) and the associated Event is written to the Event History. The [WorkflowExecutionContinuedAsNew](/references/events#workflowexecutioncontinuedasnew) Event will have an "initiator" field that specifies the Retry Policy as the cause, along with the new Run Id for the next retry attempt. The new Workflow Execution is created immediately, but the first Workflow Task won't be scheduled until the backoff duration is exhausted. That duration is recorded as the `firstWorkflowTaskBackoff` field of the new run's `WorkflowExecutionStartedEventAttributes` event.

---

## About Temporal SDKs

Temporal SDKs (software development kits) are an open source collection of tools, libraries, and APIs that enable Temporal Application development. They offer a [Temporal Client](#temporal-client) to interact with the [Temporal Service](/temporal-service), APIs to develop your [Temporal Application](#temporal-application), and APIs to run horizontally scalable [Workers](/workers#worker).
SDKs are more than just a development tool, however. The SDK APIs enable developers to write code in a particular pattern that mirrors real-world processes. The SDK's internal implementation, working in collaboration with the Temporal Service, steps through that code, guaranteeing execution progression during application runtime.

## Temporal Applications {#temporal-application}

A Temporal Application is the code you write, comprised of [Workflow Definitions](/workflow-definition), [Activity Definitions](/activity-definition), code used to configure [Temporal Clients](#temporal-client), and code used to configure and start [Workers](/workers#worker). Developers create Temporal Applications using an [official Temporal SDK](#official-sdks).

Consider that the Workflow Definition code can be executed repeatedly. The Temporal Platform can concurrently support millions to billions of Workflow Executions, each of which represents an invoked Workflow Definition. Additionally, a Temporal Workflow Execution is both resumable and recoverable, and it can react to external events.

- Resumable: The ability of a process to resume execution after suspending on an _awaitable_.
- Recoverable: The ability of a process to resume execution after suspending due to a _failure_.
- Reactive: The ability of a process to respond to external events.

Hence, a Temporal Application can run for seconds or years in the presence of arbitrary load and failures.

## Official SDKs {#official-sdks}

**What are the officially supported SDKs?**

Each Temporal SDK targets a specific programming language:

- [Go SDK feature guides](/develop/go)
- [Java SDK feature guides](/develop/java)
- [Python SDK feature guides](/develop/python/)
- [TypeScript SDK feature guides](/develop/typescript/)
- [.NET SDK feature guides](/develop/dotnet)
- [Ruby SDK feature guides](/develop/ruby/)
- [PHP SDK feature guides](/develop/php)

Despite supporting multiple languages and many features, Temporal SDKs aim to make developers feel at home in their language.

### Third-party SDKs

The following third-party SDKs exist but are not officially supported by Temporal:

- [Swift](https://github.com/apple/swift-temporal-sdk) from [@Swift Community](https://github.com/apple)
- [Haskell](https://github.com/MercuryTechnologies/hs-temporal-sdk) from [@MercuryTechnologies](https://github.com/MercuryTechnologies)
- [Clojure](https://github.com/manetu/temporal-clojure-sdk) from [@Manetu](https://github.com/manetu)
- [Scala](https://github.com/vitaliihonta/zio-temporal) from [@vitaliihonta](https://github.com/vitaliihonta)
- [Ruby](https://github.com/coinbase/temporal-ruby) from [@coinbase](https://github.com/coinbase)

## Why use a Temporal SDK? {#why-use-an-sdk}

Temporal SDKs empower developers to concentrate on creating dependable and scalable business logic, alleviating the need to build home-grown supervisor systems to ensure reliability and fault tolerance. This is possible because the Temporal SDK provides a unified library that abstracts the intricacies of how Temporal handles distributed systems.

### Development pattern

By abstracting complexities and streamlining boilerplate code, developers can craft straightforward code that directly aligns with their business logic, enhancing code readability and bolstering developer productivity. Consider a bank loan application: developers can design the business logic of a bank loan using the Temporal SDK.
The Workflow defines the overarching business logic, encompassing tasks such as validating applicant information, credit checks, loan approval, and applicant notifications, as Activities.

:::caution Do not copy and use code

The following is pseudocode. For tested samples, see your language SDK's developer's guide.

:::

```
func LoanApplicationWorkflow {
    sdk.ExecuteActivity(CreditCheck)
    sdk.ExecuteActivity(AutomatedApproval)
    sdk.ExecuteActivity(NotifyApplicant)
    // ...
}
```

For instance, Temporal SDKs have built-in support for handling failures, timeouts, and retries. In the event of an Activity failure, the SDK automatically initiates retries according to configurable policies established by the developer within the SDK. This streamlined process simplifies the integration of fault-tolerance mechanisms into applications.

:::caution Do not copy and use code

The following is pseudocode. For tested samples, see your language SDK's developer's guide.

:::

```
func LoanApplicationWorkflow {
    options = {
        MaxAttempts: 3,
        StartToCloseTimeout: 30min,
        HeartbeatTimeout: 10min,
    }
    sdk.ExecuteActivity(CreditCheck, options)
    sdk.ExecuteActivity(AutomatedApproval)
    sdk.ExecuteActivity(NotifyApplicant)
    // ...
}
```

### Replays

Another quality of the SDKs lies in their ability to replay Workflow Executions, a complex operation that contributes significantly to the Platform's promised reliability. We will delve into this idea more later, but for now, it signifies that the SDKs can automatically continue a process from the point of interruption, should a failure occur. This capability stems from the SDK's ability to persist each step the program takes.

## Temporal SDKs major components {#major-components}

**What are the major components of Temporal SDKs?**

Temporal SDKs offer developers the following:

- A Temporal Client to communicate with a Temporal Service
- APIs to develop application code (Workflows & Activities)
- APIs to configure and run Workers

Let's break down each one.

### Temporal Client

A Temporal Client acts as the bridge for communication between your applications and the Temporal Service. The Client performs key functions that facilitate the execution of, management of, and communication with Workflows. The most common operations that a Temporal Client enables you to perform are the following:

- Get the result of a Workflow Execution.
- List Workflow Executions.
- Query a Workflow Execution.
- Signal a Workflow Execution.
- Start a Workflow Execution.

The following code is an example using the Go SDK. It showcases how to initialize a Temporal Client, create a connection to a local Temporal Service, and start a Workflow Execution:

:::caution Do not copy and use code

The following code is for example purposes only. For tested code samples and best practices, use your preferred language SDK's developer's guide.
- [Go SDK Temporal Client feature guide](/develop/go/temporal-client)
- [Java SDK Temporal Client feature guide](/develop/java/temporal-client)
- [PHP SDK Temporal Client feature guide](/develop/php/temporal-client)
- [Python SDK Temporal Client feature guide](/develop/python/temporal-client)
- [TypeScript SDK Temporal Client feature guide](/develop/typescript/core-application)

:::

```go
package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Temporal Client setup code
	c, err := client.NewClient(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create client", err)
	}
	defer c.Close()

	// Prepare Workflow options and parameters
	workflowOptions := client.StartWorkflowOptions{
		ID:        "loan-application-1",
		TaskQueue: "loan-application-task-queue",
	}
	applicantDetails := ApplicantDetails{
		// ...
	}

	// Start the Workflow
	workflowRun, err := c.ExecuteWorkflow(context.Background(), workflowOptions, "loan-application-workflow", applicantDetails)
	if err != nil {
		// ...
	}
	// ...
}
```

Developers can then use the Client as the main entry point for interacting with the application through Temporal. Using that Client, developers may, for example, start or Signal Workflows, Query a Workflow's state, and so on. We can see in the example above how the developer used the `ExecuteWorkflow` API to start a Workflow.

### APIs to Develop Workflows

Workflows are defined as code: either a function or an object method, depending on the language. For example, the following is a valid Temporal Workflow in Go:

:::caution Do not copy and use code

The following code is for example purposes only. For tested code samples and best practices, use your preferred language SDK's developer's guide.

:::

```go
func LoanApplication(ctx workflow.Context) error {
	// ...
	return nil
}
```

The Workflow code uses Temporal SDK APIs to orchestrate the steps of the application.

:::caution Do not copy and use code

The following code is for example purposes only. For tested code samples and best practices, use your preferred language SDK's developer's guide.

:::

```go
func LoanApplication(ctx workflow.Context, input *LoanApplicationWorkflowInput) (*LoanApplicationWorkflowResult, error) {
	// ...
	var result activities.CreditCheckResult
	f := workflow.ExecuteActivity(ctx, a.CreditCheck, CreditCheckInput(*input))
	err := f.Get(ctx, &result)
	// ...

	// Return the results
	return &loanApplicationResults, nil
}
```

A Workflow executes Activities (other functions that interact with external systems), handles and sends messages (Queries, Signals, Updates), and interacts with other Workflows. This Workflow code, while executing, can be paused, resumed, and migrated across physical machines without losing state.

When a Workflow calls the API to execute an Activity, the Worker sends a [Command](https://docs.temporal.io/references/commands) back to the Temporal Service. The Temporal Service creates Activity Tasks in response, which the same or a different Worker can then pick up and begin executing. In this way, the Worker and Temporal Service work together to incrementally execute Workflow code in a reliable way. We discuss this in more detail in [The SDK and Temporal Service relationship](/encyclopedia/temporal-sdks#sdk-and-cluster-relationship) section.

The SDK APIs also enable developers to write code that more genuinely maps to their process. This is because, without a specialized SDK, developers might have to write a lot of boilerplate code.
This can lead to code that's hard to maintain, difficult to understand, or that doesn't directly correspond to the underlying business process. For example, the bank loan application Workflow might actually look like this:

:::caution Do not copy and use code

The following code is for example purposes only. For tested code samples and best practices, use your preferred language SDK's developer's guide.

:::

```go
// LoanApplicationWorkflow is the workflow definition.
func LoanApplicationWorkflow(ctx workflow.Context, applicantName string, loanAmount int) (string, error) {
	// Step 1: Notify the applicant that the application process has started
	err := workflow.ExecuteActivity(ctx, NotifyApplicantActivity, applicantName, "Application process started").Get(ctx, nil)
	if err != nil {
		return "", err
	}

	// Step 2: Perform a credit check
	var creditCheckResult string
	err = workflow.ExecuteActivity(ctx, LoanCreditCheckActivity, loanAmount).Get(ctx, &creditCheckResult)
	if err != nil {
		return "", err
	}

	// Step 3: Perform an automatic approval check
	var approvalCheckResult string
	err = workflow.ExecuteActivity(ctx, AutomaticApprovalCheckActivity, creditCheckResult).Get(ctx, &approvalCheckResult)
	if err != nil {
		return "", err
	}

	// Step 4: Notify the applicant of the decision
	var notificationResult string
	err = workflow.ExecuteActivity(ctx, NotifyApplicantActivity, applicantName, approvalCheckResult).Get(ctx, &notificationResult)
	if err != nil {
		return "", err
	}

	return notificationResult, nil
}
```

The level of abstraction that the APIs offer enables the developer to focus on business logic without having to worry about the intricacies of distributed computing, such as retries, or having to explicitly maintain a state machine and the intermediate state for each step of the process. Additionally, the state of the Workflow is automatically persisted, so if a failure does occur, it resumes right where it left off.

### APIs to create and manage Worker Processes

Workers are responsible for executing Workflow and Activity code (application code). The SDK provides APIs for configuring and starting Workers, enabling developers to control how the code is executed. Workers are horizontally scalable, often run with systems like Kubernetes, and configured according to the application's needs. Here is an example of how you could initialize a Worker using the Go SDK.

:::caution Do not copy and use code

The following code is for example purposes only. For tested code samples and best practices, use your preferred language SDK's developer's guide.

:::

```go
func main() {
	// Create the client object just once per process
	c, err := client.NewClient(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create Temporal client", err)
	}
	defer c.Close()

	// Create the Worker instance
	w := worker.New(c, "loan-application-task-queue", worker.Options{})

	// Register the Workflow and Activity with the Worker
	w.RegisterWorkflow(LoanApplicationWorkflow)
	w.RegisterActivity(LoanCreditCheck)

	// Start listening to the Task Queue
	err = w.Run(worker.InterruptCh())
	if err != nil {
		log.Fatalln("Unable to start Worker", err)
	}
}
```

The Worker polls the specified Task Queue, processes the Tasks it receives, and reports the results back to the Temporal Service. Workers execute both Workflows and Activities, and the SDK ensures that they perform these tasks efficiently and reliably.
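Because Workers are configured according to the application's needs, here is a brief, hedged sketch of a few common tuning knobs on the Go SDK's `worker.Options`; the field names exist in the SDK, but the values below are illustrative only, not recommendations:

```go
// Extending the previous example: tune the Worker's concurrency limits.
w := worker.New(c, "loan-application-task-queue", worker.Options{
	// Maximum number of Activity Tasks this process executes concurrently.
	MaxConcurrentActivityExecutionSize: 100,
	// Maximum number of Workflow Tasks this process executes concurrently.
	MaxConcurrentWorkflowTaskExecutionSize: 50,
})
```

Running several such Worker processes against the same Task Queue is how you scale horizontally; the Temporal Service distributes Tasks across whichever Workers poll that queue.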
### APIs to customize Activity Execution behavior

Activities in Temporal are individual units of work that often represent non-deterministic parts of the code logic, such as querying a database or calling an external service. The SDK provides APIs to customize the behavior of an Activity Execution. By default, if an Activity attempts to communicate with another system and encounters a transient failure, like a network issue, Temporal retries the Activity automatically. Beyond that, Temporal enables developers to control a variety of timeouts, a Retry Policy, Heartbeat monitoring, and asynchronous completion. The following code is an example of a custom set of Activity Execution options that affect the timeout and retry behavior of the execution, should the Activity encounter a failure.

:::caution Do not copy and use code

The following code is for example purposes only. For tested code samples and best practices, use your preferred language SDK's developer's guide.

:::

```go
// LoanApplicationWorkflow is the Workflow Definition.
func LoanApplicationWorkflow(ctx workflow.Context, applicantName string, loanAmount int) (string, error) {
	// ...
	var creditCheckResult string

	// Set custom timeouts and a Retry Policy
	ao := workflow.ActivityOptions{
		ScheduleToCloseTimeout: time.Hour,
		HeartbeatTimeout:       time.Minute,
		RetryPolicy: &temporal.RetryPolicy{
			InitialInterval:    time.Second,
			BackoffCoefficient: 2,
			MaximumInterval:    time.Minute,
			MaximumAttempts:    5,
		},
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	err := workflow.ExecuteActivity(ctx, LoanCreditCheckActivity, loanAmount).Get(ctx, &creditCheckResult)
	if err != nil {
		return "", err
	}
	// ...
	return notificationResult, nil
}

// LoanCreditCheckActivity is an Activity function that performs a credit check.
func LoanCreditCheckActivity(ctx context.Context, loanAmount int) (string, error) {
	// ... your logic here ...
	return "Credit check passed", nil
}
```

## The SDK and Temporal Service relationship {#sdk-and-cluster-relationship}

**How do the Temporal SDKs work with the Temporal Service?**

The Temporal Service functions more as a choreographer than a conductor. Rather than directly assigning tasks to Workers, the Temporal Service arranges Tasks into a Task Queue, and Workers poll the Task Queue. Developers can create a fleet of Workers and tune them so that a Task is picked up as soon as it is available. If a Worker goes down, Tasks wait until the next Worker is available.

A Workflow might request to execute an Activity, start a Timer, or start a Child Workflow, each of which translates into a Command dispatched to the Temporal Service. In addition to acting on these Commands, the Temporal Service documents the interaction by appending the corresponding Events to the Workflow Execution's Event History.

Take, for instance, the call to execute an Activity. When a Workflow invokes it, the Worker doesn't immediately execute the Activity code. Instead, it generates a ScheduleActivityTask Command and dispatches it to the Temporal Service. In response, the Temporal Service queues up a new Activity Task. Only when a Worker is free does it collect the Task and begin executing the Activity code.

The Temporal Service persists the Workflow Execution Event History, so that if there is a failure, the SDK Worker can Replay the execution and resume where it left off. This is where the deterministic constraints of the Workflow code come into play, requiring the use of Activities to create side effects and interact with the outside world.
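Those deterministic constraints are worth a concrete illustration. Workflow code must obtain time, delays, and other side-effect-prone values through SDK APIs so that Replay yields the same results each time. A minimal sketch in the Go SDK (the Workflow name is illustrative):

```go
func DeterminismExample(ctx workflow.Context) error {
	// Use workflow.Now instead of time.Now: on Replay, the SDK returns
	// the timestamp recorded in the Event History, not the wall clock.
	startedAt := workflow.Now(ctx)

	// Use workflow.Sleep instead of time.Sleep: this creates a durable
	// Timer on the Temporal Service instead of blocking the Worker.
	if err := workflow.Sleep(ctx, time.Hour); err != nil {
		return err
	}

	// Network calls and other side effects belong in Activities; their
	// results are recorded in the Event History and reused on Replay.
	workflow.GetLogger(ctx).Info("Workflow resumed", "startedAt", startedAt)
	return nil
}
```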
Let's look at an example Workflow with a single Activity.

```go
func LoanApplication(ctx workflow.Context, input *LoanApplicationWorkflowInput) (*LoanApplicationWorkflowResult, error) {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})

	// a is an instance of Activities registered with the Worker
	var result NotifyApplicantActivityResult
	f := workflow.ExecuteActivity(ctx, a.NotifyApplicantActivity, NotifyApplicantActivityInput(*input))
	if err := f.Get(ctx, &result); err != nil {
		return nil, err
	}

	// Return the results
	return &LoanApplicationWorkflowResult{}, nil
}

type Activities struct{}

func (a *Activities) NotifyApplicantActivity(ctx context.Context, input *NotifyApplicantActivityInput) (*NotifyApplicantActivityResult, error) {
	var result NotifyApplicantActivityResult
	// Call the third-party API and populate the result
	// ...
	return &result, nil
}
```

The Activity above performs a single call to an external API. Since the call can fail due to transient issues, we define it outside of the Workflow and provide it with retry options.

When you create a new Worker Process, the Worker creates a long-lasting connection to the Temporal Service, polling a Task Queue for Tasks related to the code it is capable of executing. Although the Worker is now running, unless a Workflow is explicitly started, the Task Queue doesn't have any Tasks on it, so no code executes.

We can use a Temporal Client (available in Temporal SDKs and the Temporal CLI) to start a new Workflow. Starting a Workflow Execution creates a new Event, WorkflowExecutionStarted, and adds it to the Workflow Execution's Event History. The Temporal Service then schedules a Workflow Task by adding it to the Task Queue. When the Worker has capacity, it picks up this Task and begins executing code. Each step of the Task (Scheduled, Started, and Completed) gets recorded into the Event History.

- Scheduled means that the Temporal Service has added a Task to the Task Queue.
- Started means that the Worker has dequeued the Task.
- Completed means that the Worker finished executing the Task by responding to the Temporal Service.

When the call to invoke the Activity is evaluated, the Worker suspends executing the code and sends a Command to the Temporal Service to schedule an Activity Task. When the Worker Process can perform more work, it picks up the Activity Task and begins executing the Activity code, which includes the call to the external API. If the Activity fails, say because the API goes down, Temporal automatically retries the Activity according to the Retry Policy in effect (by default, starting at one-second intervals with exponential backoff, and with no limit on attempts) until the Activity succeeds or is canceled.

In the case where the call succeeds and the code completes, the Worker tells the Temporal Service that the Activity Task completed. Included is any data returned from the Activity (the results of the API call), which is then persisted in the Workflow Execution Event History and is now accessible to the Workflow code.

The Temporal Service creates a new Workflow Task, which the Worker picks up. This is when the SDK Worker Replays the Workflow code, using the Event History as guidance on what to expect. If the Replay encounters an Event that doesn't match what the code expects, a [non-determinism](/references/errors#non-deterministic-error) error gets thrown. If there is alignment, the Worker continues evaluating code.
Assuming the Activity Execution is successful, the Workflow now has the result of the Activity, and the Worker can finish evaluating and executing the Workflow code, responding to the Temporal Service when complete. The result of the Workflow can now be retrieved using a Temporal Client. And that's how a Temporal Worker and the Temporal Service work together.

---

## Archival

This page discusses [Archival](#archival).

## What is Archival? {#archival}

Archival is a feature that automatically backs up [Event Histories](/workflow-execution/event#event-history) and Visibility records from Temporal Service persistence to a custom blob store.

- [How to create a custom Archiver](/self-hosted-guide/archival#custom-archiver)
- [How to set up Archival](/self-hosted-guide/archival#set-up-archival)

Workflow Execution Event Histories are backed up after the [Retention Period](/temporal-service/temporal-server#retention-period) is reached. Visibility records are backed up immediately after a Workflow Execution reaches a Closed status.

Archival enables Workflow Execution data to persist as long as needed, without overwhelming the Temporal Service's persistence store. This feature is helpful for compliance and debugging.

Temporal's Archival feature is considered **experimental** and not subject to the normal [versioning and support policy](/temporal-service/temporal-server#versions-and-support).

Archival is not supported when running Temporal through Docker. It's disabled by default when installing the system manually and when deploying through [helm charts](https://github.com/temporalio/helm-charts/blob/main/charts/temporal/templates/server-configmap.yaml). It can be enabled in the [config](https://github.com/temporalio/temporal/blob/main/config/development.yaml).

---

## Multi-Cluster Replication

This page discusses the following:

- [Multi-Cluster Replication](#multi-cluster-replication)
- [Namespace Versions](#namespace-versions)
- [Version History](#version-history)
- [Conflict Resolution](#conflict-resolution)
- [Zombie Workflows](#zombie-workflows)
- [Workflow Task Processing](#workflow-task-processing)

## What is Multi-Cluster Replication? {#multi-cluster-replication}

Multi-Cluster Replication is a feature that asynchronously replicates Workflow Executions from active Clusters to other passive Clusters, for backup and state reconstruction. When necessary, for higher availability, Cluster operators can fail over to any of the backup Clusters.

Temporal's Multi-Cluster Replication feature is considered **experimental** and not subject to the normal [versioning and support policy](/temporal-service/temporal-server#versions-and-support).

Temporal automatically forwards Start, Signal, and Query requests to the active Cluster. This feature must be enabled through a Dynamic Config flag per [Global Namespace](/global-namespace). When the feature is enabled, Tasks are sent to the Parent Task Queue partition that matches that Namespace, if it exists.

All Visibility APIs can be used against active and standby Clusters. This enables [Temporal UI](https://docs.temporal.io/web-ui) to work seamlessly for Global Namespaces. Applications making API calls directly to the Temporal Visibility API continue to work even if a Global Namespace is in standby mode. However, they might see a lag due to replication delay when querying the Workflow Execution state from a standby Cluster.

## Namespace Versions

A _version_ is a concept in Multi-Cluster Replication that describes the chronological order of events per Namespace.
With Multi-Cluster Replication, all Namespace change events and Workflow Execution History events are replicated asynchronously for high throughput. This means that data across Clusters is **not** strongly consistent. To guarantee that Namespace data and Workflow Execution data achieve eventual consistency (especially when there is a data conflict during a failover), a **version** is introduced and attached to Namespaces. All Workflow Execution History entries generated in a Namespace carry the version attached to that Namespace.

All participating Clusters are preconfigured with a unique initial version and a shared version increment, such that:

- `initial version < shared version increment`

When failing over a Namespace from one Cluster to another, the version attached to the Namespace changes according to the following rule: among all versions that satisfy `version % (shared version increment) == (new active cluster's initial version)`, pick the smallest version that also satisfies `version >= old version in namespace`.

When there is a data conflict, a comparison is made, and the Workflow Execution History entries with the highest version are considered the source of truth.

When a Cluster tries to mutate a Workflow Execution History, the version is checked. A Cluster can mutate a Workflow Execution History only if both of the following are true:

- The version in the Namespace belongs to this Cluster, i.e. `(version in namespace) % (shared version increment) == (this cluster's initial version)`
- The version of this Workflow Execution History's last entry (event) is less than or equal to the version in the Namespace, i.e. `(last event's version) <= (version in namespace)`
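The failover rule above is mechanical enough to sketch in a few lines. The following illustrative function (not Temporal Server's actual implementation) computes a Namespace's new version when failing over to a target Cluster:

```go
// nextFailoverVersion returns the smallest version v such that
// v >= oldVersion and v % sharedIncrement == targetInitialVersion.
func nextFailoverVersion(oldVersion, targetInitialVersion, sharedIncrement int64) int64 {
	v := (oldVersion/sharedIncrement)*sharedIncrement + targetInitialVersion
	if v < oldVersion {
		v += sharedIncrement
	}
	return v
}
```

With initial versions 1 (Cluster A) and 2 (Cluster B) and a shared version increment of 10, failing over a Namespace currently at version 2 to Cluster A yields `nextFailoverVersion(2, 1, 10) == 11`, which matches the example below.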
Namespace version change example:

Assume the following scenario:

- Cluster A has initial version: 1
- Cluster B has initial version: 2
- Shared version increment: 10

T = 0: Namespace α is registered, with the active Cluster set to Cluster A

```
namespace α's version is 1
all Workflow events generated within this Namespace will come with version 1
```

T = 1: Namespace β is registered, with the active Cluster set to Cluster B

```
namespace β's version is 2
all Workflow events generated within this Namespace will come with version 2
```

T = 2: Namespace α is updated, with the active Cluster set to Cluster B

```
namespace α's version is 2
all Workflow events generated within this Namespace will come with version 2
```

T = 3: Namespace β is updated, with the active Cluster set to Cluster A

```
namespace β's version is 11
all Workflow events generated within this Namespace will come with version 11
```
## Version history

Version history is a concept that provides a high-level summary of the version information for a Workflow Execution History. Whenever a new Workflow Execution History entry is generated, the current version of the Namespace is attached to it. The Workflow Execution's mutable state keeps track of all history entries (events) and their corresponding versions.
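Conceptually, this bookkeeping reduces to a compact list of (event ID, version) pairs, where each pair records the last event ID written at a given version. The following is an illustrative sketch (not Temporal Server's actual data structure):

```go
// VersionHistoryItem marks the last event ID written at a given version.
type VersionHistoryItem struct {
	EventID int64
	Version int64
}

// VersionHistory summarizes one history branch as (event ID, version) pairs.
type VersionHistory struct {
	Items []VersionHistoryItem
}

// AddEvent records a newly appended history event. If the event carries the
// same version as the last item, only the last event ID advances; otherwise,
// a new item is appended.
func (vh *VersionHistory) AddEvent(eventID, version int64) {
	if n := len(vh.Items); n > 0 && vh.Items[n-1].Version == version {
		vh.Items[n-1].EventID = eventID
		return
	}
	vh.Items = append(vh.Items, VersionHistoryItem{EventID: eventID, Version: version})
}
```

Applying the events from the first example below, (1, 1), (2, 1), (3, 1), and then (4, 2) after a failover, leaves `Items` equal to `[(3, 1), (4, 2)]`, matching the tables.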
Version history example (without data conflict):

- Cluster A has initial version: 1
- Cluster B has initial version: 2
- Shared version increment: 10

T = 0: adding event with event ID == 1 & version == 1

View in both Cluster A & B:

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 1        | 1       |
```

T = 1: adding event with event ID == 2 & version == 1

View in both Cluster A & B:

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 2        | 1       |
| 2        | 1             |
```

T = 2: adding event with event ID == 3 & version == 1

View in both Cluster A & B:

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 3        | 1       |
| 2        | 1             |
| 3        | 1             |
```

T = 3: Namespace failover triggered; the Namespace version is now 2; adding event with event ID == 4 & version == 2

View in both Cluster A & B:

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 3        | 1       |
| 2        | 1             |    | 4        | 2       |
| 3        | 1             |
| 4        | 2             |
```

T = 4: adding event with event ID == 5 & version == 2

View in both Cluster A & B:

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 3        | 1       |
| 2        | 1             |    | 5        | 2       |
| 3        | 1             |
| 4        | 2             |
| 5        | 2             |
```
Because Temporal favors availability over consistency (it is an AP system, in CAP terms), during a failover (a change of a Namespace's active Cluster) more than one Cluster can modify the same Workflow Execution, causing the Workflow Execution History to diverge. The example below shows what the version history looks like under such conditions.
Version history example (with data conflict):

The following shows the version history of the same Workflow Execution in two different Clusters.

- Cluster A has initial version: 1
- Cluster B has initial version: 2
- Cluster C has initial version: 3
- Shared version increment: 10

T = 0: View in both Cluster B & C

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 2        | 1       |
| 2        | 1             |    | 3        | 2       |
| 3        | 2             |
```

T = 1: adding event with event ID == 4 & version == 2 in Cluster B

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 2        | 1       |
| 2        | 1             |    | 4        | 2       |
| 3        | 2             |
| 4        | 2             |
```

T = 1: Namespace failover to Cluster C; adding event with event ID == 4 & version == 3 in Cluster C

```
Events                          Version History
| Event ID | Event Version |    | Event ID | Version |
| -------- | ------------- |    | -------- | ------- |
| 1        | 1             |    | 2        | 1       |
| 2        | 1             |    | 3        | 2       |
| 3        | 2             |    | 4        | 3       |
| 4        | 3             |
```

T = 2: the replication task from Cluster C arrives in Cluster B

Note: the structures below are trees; the history now has two branches.

```
Events
| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |
           /              \
| Event ID | Event Version |    | Event ID | Event Version |
| -------- | ------------- |    | -------- | ------------- |
| 4        | 2             |    | 4        | 3             |

Version History
| Event ID | Version |
| -------- | ------- |
| 2        | 1       |
| 3        | 2       |
          /          \
| Event ID | Version |    | Event ID | Version |
| -------- | ------- |    | -------- | ------- |
| 4        | 2       |    | 4        | 3       |
```

T = 2: the replication task from Cluster B arrives in Cluster C, with the same result as above
## Conflict resolution

When a Workflow Execution History diverges, proper conflict resolution is applied.

In Multi-Cluster Replication, Workflow Execution History Events are modeled as a tree, as shown in the second example in [Version History](#version-history). Workflow Execution Histories that diverge have more than one history branch. Among all history branches, the branch with the highest version is considered the `current branch`, and the Workflow Execution's mutable state is a summary of the current branch. Whenever there is a switch between Workflow Execution History branches, a complete rebuild of the Workflow Execution's mutable state occurs.

Temporal Multi-Cluster Replication relies on asynchronous replication of Events across Clusters, so in the case of a failover it is possible for an Activity Task to be dispatched again to the newly active Cluster because of replication task lag. This also means that whenever a Workflow Execution is updated after a failover by the new Cluster, any previous replication tasks for that Execution cannot be applied. This results in the loss of some progress made by the Workflow Execution in the previously active Cluster. During such conflict resolution, Temporal re-injects any external Events, like Signals, into the new Event History before discarding replication tasks. Even though some progress can roll back during failovers, Temporal guarantees that Workflow Executions won't get stuck and will continue to make forward progress.

Activity Execution completions are not forwarded across Clusters. Any outstanding Activities eventually time out based on the configuration. Your application should have retry logic in place so that the Activity gets retried and dispatched again to a Worker after the failover to the new Cluster. Handling this is similar to handling an Activity Task timeout caused by a Worker restarting.

## Zombie Workflows

There is an existing contract that, for any Namespace and Workflow Id combination, at most one Run (Namespace + Workflow Id + Run Id) can be open and executing at a time.

Multi-Cluster Replication aims to keep the Workflow Execution History as up-to-date as possible among all participating Clusters. Due to the nature of Multi-Cluster Replication (for example, Workflow Execution History events are replicated asynchronously), different Runs (same Namespace and Workflow Id) can arrive at the target Cluster at different times, sometimes out of order, as shown below:

```
| Cluster A |               | Network Layer |               | Cluster B |
      |                            |                              |
      | Run 1 Replication Events   |                              |
      | -------------------------> |                              |
      | Run 2 Replication Events   |                              |
      | -------------------------> |                              |
      |                            | Run 2 Replication Events     |
      |                            | ---------------------------> |
      |                            | Run 1 Replication Events     |
      |                            | ---------------------------> |
      |                            |                              |
| Cluster A |               | Network Layer |               | Cluster B |
```

Because Run 2 appears in Cluster B first, Run 1 cannot be replicated as "runnable" due to the rule of at most one open Run (see above); thus, the "zombie" Workflow Execution state is introduced. A "zombie" state is one in which a Workflow Execution cannot be actively mutated by a Cluster (even if the corresponding Namespace is active in that Cluster). A zombie Workflow Execution can only be changed by a replication Task.
Run 1 is replicated in the same way as Run 2, except that Run 1's execution is placed in the "zombie" state before Run 1 reaches completion.

## Workflow Task processing

In the context of Multi-Cluster Replication, a Workflow Execution's mutable state is an entity that tracks all pending tasks.

Prior to the introduction of Multi-Cluster Replication, Workflow Execution History entries (events) came from a single branch, and the Temporal Server would only append new entries (events) to the Workflow Execution History.

After the introduction of Multi-Cluster Replication, it is possible for a Workflow Execution to have multiple Workflow Execution History branches. Tasks generated according to one history branch may become invalidated by switching history branches during conflict resolution.

Example:

T = 0: task A is generated according to Event Id: 4, version: 2

```
| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |
           |
| Event ID | Event Version |
| -------- | ------------- |
| 4        | 2             |  <-- task A belongs to this event
```

T = 1: conflict resolution happens; the Workflow Execution's mutable state is rebuilt, and history Event Id: 4, version: 3 is written to persistence

```
| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |
           /              \
| Event ID | Event Version |    | Event ID | Event Version |
| -------- | ------------- |    | -------- | ------------- |
| 4        | 2             |    | 4        | 3             |
  ^ task A belongs              ^ current branch /
    to this event                 mutable state
```

T = 2: task A is loaded. At this point, due to the rebuild of the Workflow Execution's mutable state (conflict resolution), task A is no longer relevant (task A's corresponding Event belongs to a non-current branch). The task processing logic verifies both the Event Id and the version of the task against the Workflow Execution's mutable state, then discards task A.

---

## Persistence

This page discusses the following:

- [Persistence](#persistence)
- [Dependency Versions](#dependency-versions)

## What is Persistence? {#persistence}

The Temporal Persistence store is a database used by the [Temporal Server](/temporal-service/temporal-server) to persist events generated and processed in your Temporal Service and SDK.

A Temporal Service's only required dependency for basic operation is the Persistence database. Multiple types of databases are supported.

The database stores the following types of data:

- Tasks: Tasks to be dispatched.
- State of Workflow Executions:
  - Execution table: A capture of the mutable state of Workflow Executions.
  - History table: An append-only log of Workflow Execution History Events.
- Namespace metadata: Metadata of each Namespace in the Temporal Service.
- [Visibility](/temporal-service/visibility) data: Enables operations like "show all running Workflow Executions". For production environments, we recommend using Elasticsearch as your Visibility store.
An Elasticsearch database must be configured in a self-hosted Temporal Service to enable [advanced Visibility](/visibility#advanced-visibility) on Temporal Server versions 1.19.1 and earlier. With Temporal Server version 1.20 and later, advanced Visibility features are available on SQL databases such as MySQL (version 8.0.17 and later), PostgreSQL (version 12 and later), and SQLite (v3.31.0 and later), as well as on Elasticsearch.

### Dependency versions

Temporal tests compatibility by spanning the minimum and maximum stable major versions for each supported database. The following versions are used in our test pipelines and actively tested before we release any version of Temporal:

- **Cassandra v3.11 and v4.0**
- **PostgreSQL 13.18, 14.15, 15.10, and 16.6**
- **MySQL v5.7 and v8.0** (specifically 8.0.19+ due to a bug)

You can verify supported databases in the [Temporal Server release notes](https://github.com/temporalio/temporal/releases).

- Because Temporal Server primarily relies on core database functionality, we do not expect compatibility to break often.
- We do not run tests with vendors like Vitess and CockroachDB.
- Temporal also supports SQLite v3.x persistence, but this is meant only for development and testing, not production usage.

---

## Temporal Server

This page discusses the following:

- [Frontend Service](#frontend-service)
- [History Service](#history-service)
- [History Shard](#history-shard)
- [Matching Service](#matching-service)
- [Worker Service](#worker-service)
- [Retention Period](#retention-period)

## What is the Temporal Server? {#temporal-server}

The Temporal Server consists of four independently scalable services:

- Frontend gateway: for rate limiting, routing, and authorizing.
- History subsystem: maintains data (mutable state, queues, and timers).
- Matching subsystem: hosts Task Queues for dispatching.
- Worker Service: for internal background Workflows.

For example, a real-life production deployment can have 5 Frontend, 15 History, 17 Matching, and 3 Worker Services per Temporal Service.

The Temporal Server services can run independently or be grouped together into shared processes on one or more physical or virtual machines. For live (production) environments, we recommend that each service runs independently, because each one has different scaling requirements, and troubleshooting becomes easier. The History, Matching, and Worker Services can scale horizontally within a Temporal Service. The Frontend Service scales differently from the others because it is stateless, with no sharding or partitioning.

Each service is aware of the others, including scaled instances, through a membership protocol via [Ringpop](https://github.com/temporalio/ringpop-go).

### Versions and support

:::tip

We release new versions of the Temporal SDKs and Temporal Server software independently of one another. That said, all SDK versions support all server versions. To take advantage of bug fixes, performance improvements, and new features, please upgrade both SDKs and servers to the latest versions on a regular cadence.

:::

All Temporal Server releases abide by the [Semantic Versioning Specification](https://semver.org/). We support upgrade paths from every version beginning with Temporal v1.7.0.
For details on upgrading your Temporal Service, see [Upgrade Server](/self-hosted-guide/upgrade-server#upgrade-server).

We provide maintenance support for previously published minor and major versions by continuing to release critical bug fixes related to security, the prevention of data loss, and reliability, whenever they are found.

We aim to publish incremental upgrade guides for each minor and major version, which include specifics about dependency upgrades that we have tested (such as Cassandra 3.0 -> 3.11).

We offer maintenance support for the last three **minor** versions after a release and do not plan to "backport" patches beyond that.

We offer maintenance support for **major** versions for at least 12 months after a GA release, and we provide at least 6 months' notice before EOL/deprecating support.

**Dependencies**

Temporal offers official support for, and is tested against, dependencies with the exact versions described in the `go.mod` file of the corresponding release tag. (For example, [v1.5.1](https://github.com/temporalio/temporal/tree/v1.5.1) dependencies are documented in [the go.mod for v1.5.1](https://github.com/temporalio/temporal/blob/v1.5.1/go.mod).)

## What is a Frontend Service? {#frontend-service}

The Frontend Service is a stateless gateway service that exposes a strongly typed [Proto API](https://github.com/temporalio/api/blob/master/temporal/api/workflowservice/v1/service.proto). The Frontend Service is responsible for rate limiting, authorizing, validating, and routing all inbound calls.

Types of inbound calls include the following:

- [Namespace](/namespaces) CRUD
- External events
- Worker polls
- [Visibility](/temporal-service/visibility) requests
- [Temporal CLI](/cli) operations
- Calls from a remote Temporal Service related to [Multi-Cluster Replication](/temporal-service/multi-cluster-replication)

Every inbound request related to a Workflow Execution must have a Workflow Id, which is hashed for routing purposes. The Frontend Service has access to the hash rings that maintain service membership information, including how many nodes (instances of each service) are in the Temporal Service.

Inbound call rate limiting is applied per host and per Namespace.

The Frontend Service talks to the Matching Service, History Service, Worker Service, the database, and Elasticsearch (if in use).

- It uses grpcPort 7233 to host the service handler.
- It uses port 6933 for membership-related communication.

Ports are configurable in the Temporal Service configuration.

## What is a History Service? {#history-service}

The History Service is responsible for persisting Workflow Execution state to the Workflow History. When the Workflow Execution is able to progress, the History Service adds a Task with the Workflow's updated history to the Task Queue. From there, a Worker can poll for work, receive this updated history, and resume execution.

The total number of History Service processes can be between 1 and the total number of [History Shards](#history-shard). An individual History Service can support many History Shards. Temporal recommends starting at a ratio of 1 History Service process for every 500 History Shards. Although the total number of History Shards remains static for the life of the Temporal Service, the number of History Service processes can change.

The History Service talks to the Matching Service and the database.

- It uses grpcPort 7234 to host the service handler.
- It uses port 6934 for membership-related communication.
Ports are configurable in the Temporal Service configuration.

### What is a History Shard? {#history-shard}

A History Shard is an important unit within a Temporal Service by which concurrent Workflow Execution throughput can be scaled. Each History Shard maps to a single persistence partition. A History Shard assumes that only one concurrent operation can be within a partition at a time. In essence, the number of History Shards represents the number of concurrent database operations that can occur for a Temporal Service. This means that the number of History Shards in a Temporal Service plays a significant role in the performance of your Temporal Application.

Before integrating a database, the total number of History Shards for the Temporal Service must be chosen and set in the Temporal Service's configuration (see [persistence](/references/configuration#persistence)). After the Shard count is configured and the database integrated, the total number of History Shards for the Temporal Service cannot be changed.

In theory, a Temporal Service can operate with an unlimited number of History Shards, but each History Shard adds compute overhead to the Temporal Service. The Temporal Service has operated successfully using anywhere from 1 to 128K History Shards, with each Shard responsible for tens of thousands of Workflow Executions. One Shard is useful only in small-scale setups designed for testing, while 128K Shards are useful only in very large-scale production environments. The correct number of History Shards for any given Temporal Service depends entirely on the Temporal Application that it is supporting and the type of database.

A History Shard is represented as a hashed integer. Each Workflow Execution is automatically assigned to a History Shard. The assignment algorithm hashes Workflow Execution metadata, such as Workflow Id and Namespace, and uses that value to match a History Shard.

Each History Shard maintains the Workflow Execution Event History, Workflow Execution mutable state, and the following internal Task Queues:

- Internal Transfer Task Queue: Transfers internal tasks to the Matching Service. Whenever a new Workflow Task needs to be scheduled, the History Service's Transfer Task Queue Processor transactionally dispatches it to the Matching Service.
- Internal Timer Task Queue: Durably persists Timers.
- Internal Replicator Task Queue: Asynchronously replicates Workflow Executions from active Clusters to other passive Clusters. (Relies on the experimental Multi-Cluster feature.)
- Internal Visibility Task Queue: Pushes data to the [Advanced Visibility](/visibility#advanced-visibility) index.

## What is a Matching Service? {#matching-service}

The Matching Service is responsible for hosting user-facing [Task Queues](/task-queue) for Task dispatching. It is responsible for matching Workers to Tasks and routing new Tasks to the appropriate queue. This service can scale internally by having multiple instances.

It talks to the Frontend Service, History Service, and the database.

- It uses grpcPort 7235 to host the service handler.
- It uses port 6935 for membership-related communication.

Ports are configurable in the Temporal Service configuration.

## What is a Worker Service? {#worker-service}

The Worker Service runs background processing for the replication queue, system Workflows, and (in versions older than 1.5.0) the Kafka visibility processor.

It talks to the Frontend Service.

- It uses port 6939 for membership-related communication.
Ports are configurable in the Temporal Service configuration.

## What is a Retention Period? {#retention-period}

The Retention Period is the duration for which the Temporal Service stores data associated with closed Workflow Executions on a Namespace in the Persistence store.

- [How to set the Retention Period for a Namespace](/cli/operator#create)
- [How to set the Retention Period for a Namespace using the Go SDK](/develop/go/namespaces)
- [How to set the Retention Period for a Namespace using the Java SDK](/develop/java/namespaces)

A Retention Period applies to all closed Workflow Executions within a [Namespace](/namespaces) and is set when the Namespace is registered.

The Temporal Service triggers a Timer task at the end of the Retention Period that cleans up the data associated with the closed Workflow Execution on that Namespace.

The minimum Retention Period is 1 day. On Temporal Service version 1.18 and later, the maximum Retention Period for a Namespace can be set to any value above the minimum requirement of 1 day; ensure that your Persistence store has enough capacity for the storage. On Temporal Service versions 1.17 and earlier, the maximum Retention Period you can set is 30 days. Setting the Retention Period to 0 results in the error _A valid retention period is not set on request_.

If you don't set the Retention Period value when using the [`temporal operator namespace create`](/cli/operator#create) command, it defaults to 3 days. If you don't set the Retention Period value when using the Register Namespace Request API, it returns an error.

When changing the Retention Period (with [`temporal operator namespace update`](/cli/operator#update) or the `UpdateNamespace` API), the new duration applies to Workflow Executions that close after the change is saved.

:::info

Changing the Retention Period does NOT affect existing closed Workflow Executions: they retain their original cleanup timers based on the Retention Period that was in effect when they closed.

:::

### Manual cleanup of closed Workflow Executions

For cases where you need to remove closed Workflow Executions before their retention timer expires, you can use [`temporal workflow delete`](/cli/workflow#delete) or the `DeleteWorkflowExecution` command. This is particularly useful in combination with reducing the Retention Period, for example to clean up previously closed Workflow Executions and reduce storage costs.

---

## Temporal Service configuration

This page discusses the following:

- [Static Configuration](#static-configuration)
- [Dynamic Configuration](#dynamic-configuration)
- [Security Configuration](#temporal-cluster-security-configuration)
- [Observability](#monitoring-and-observation)

## What is Temporal Service configuration? {#cluster-configuration}

Temporal Service configuration is the setup and configuration details of your self-hosted Temporal Service, defined using YAML. You must define your Temporal Service configuration when setting up your self-hosted Temporal Service. For details on using Temporal Cloud, see the [Temporal Cloud documentation](/cloud).

Temporal Service configuration is composed of two types of configuration: [Static configuration](#static-configuration) and [Dynamic configuration](#dynamic-configuration).

### Static configuration

Static configuration contains details of how the Temporal Service should be set up. The static configuration is read just once and used to configure service nodes at startup.
Depending on how you want to deploy your self-hosted Temporal Service, your static configuration must contain details for setting up:

- Temporal Services: Frontend, History, Matching, Worker
- Membership ports for the Temporal Services
- Persistence (including History Shard count), Visibility, and Archival store setups
- TLS, authentication, authorization
- Server log level
- Metrics
- Temporal Service metadata
- Dynamic config Client

Static configuration values cannot be changed at runtime. Some values, such as the Metrics configuration or the Server log level, can be changed in the static configuration but require restarting the Temporal Service for the changes to take effect.

For details on static configuration keys, see the [Temporal Service configuration reference](/references/configuration).

For static configuration examples, see [https://github.com/temporalio/temporal/tree/master/config](https://github.com/temporalio/temporal/tree/master/config).

### Dynamic configuration

Dynamic configuration contains configuration keys that you can update in your Temporal Service setup without having to restart the server processes.

All dynamic configuration keys provided by Temporal have default values that are used by the Temporal Service. You can override the default values by setting different values for the keys in a YAML file and setting the [dynamic configuration client](/references/configuration#dynamicconfigclient) to poll this file for updates. Setting dynamic configuration for your Temporal Service is optional.

Setting overrides for some configuration keys updates the Temporal Service configuration immediately. However, for configuration fields that are checked at startup (such as thread pool size), you must restart the server for the changes to take effect.

Use dynamic configuration keys to fine-tune your self-deployed Temporal Service setup.

For details on dynamic configuration keys, see the [Dynamic configuration reference](/references/dynamic-configuration).

For dynamic configuration examples, see [https://github.com/temporalio/temporal/tree/master/config/dynamicconfig](https://github.com/temporalio/temporal/tree/master/config/dynamicconfig).

## What is Temporal Service security configuration? {#temporal-cluster-security-configuration}

Secure your Temporal Service (self-hosted and Temporal Cloud) by encrypting your network communication and setting authentication and authorization protocols for API calls.

For details on setting up your Temporal Service security, see [Temporal Platform security features](/security).

### mTLS encryption

Temporal supports Mutual Transport Layer Security (mTLS) to encrypt network traffic between services within a Temporal Service, or between application processes and a Temporal Service.

On the self-hosted Temporal Service, configure mTLS in the `tls` section of the [Temporal Service configuration](/references/configuration#tls). mTLS configuration is a [static configuration](#static-configuration) property. You can then use either the [`WithConfig`](/references/server-options#withconfig) or [`WithConfigLoader`](/references/server-options#withconfigloader) server option to start your Temporal Service with this configuration.

The mTLS configuration includes two sections that separate communication within a Temporal Service from client calls made from your application to the Temporal Service:

- `internode`: configuration for encrypting communication between nodes within the Temporal Service.
- `frontend`: configuration for encrypting the public endpoints of the Frontend Service.

Setting mTLS for `internode` and `frontend` separately lets you use different certificates and settings to encrypt each section of traffic.

### Using certificates for Client connections

Use CA certificates to authenticate client connections to your Temporal Service.

On Temporal Cloud, you can [set your CA certificates in your Temporal Cloud settings](/cloud/certificates) and use the end-entity certificates in your client calls.

On the self-hosted Temporal Service, you can restrict access to Temporal Service endpoints by using the `clientCAFiles` or `clientCAData` property and the [`requireClientAuth`](/references/configuration#tls) property in your Temporal Service configuration. These properties can be specified in both the `internode` and `frontend` sections of the [mTLS configuration](/references/configuration#tls). For details, see the [tls configuration reference](/references/configuration#tls).

### Server name specification

On the self-hosted Temporal Service, you can specify `serverName` in the `client` section of your mTLS configuration to prevent spoofing and [MITM attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).

Entering a value for `serverName` enables established connections to authenticate the endpoint, ensuring that the server certificate presented to any connected client has the specified server name in its CN property. This measure can be used for `internode` and `frontend` endpoints.

For more information on mTLS configuration, see the [tls configuration reference](/references/configuration#tls).

### Authentication and authorization

Temporal provides authentication interfaces that can be set to restrict access to your data. These protocols address three areas: servers, client connections, and users.

Temporal offers two plugin interfaces for authentication and authorization of API calls:

- [`ClaimMapper`](/self-hosted-guide/security#claim-mapper)
- [`Authorizer`](/self-hosted-guide/security#authorizer-plugin)

The logic of both plugins can be customized to fit a variety of use cases. When the plugins are provided, the Frontend Service invokes their implementation before running the requested operation.

## What is Temporal Service observability? {#monitoring-and-observation}

You can monitor and observe performance with metrics emitted by your self-hosted Temporal Service or by Temporal Cloud.

Temporal emits metrics by default in a format that is supported by Prometheus. Any metrics software that supports the same format can be used. Currently, we test with the following Prometheus and Grafana versions:

- **Prometheus >= v2.0**
- **Grafana >= v2.5**

Temporal Cloud emits metrics through a Prometheus HTTP API endpoint, which can be used directly as a Prometheus data source in Grafana or to query and export Cloud metrics to any observability platform.
For details on Cloud metrics and setup, see the following:

- [Temporal Cloud metrics reference](/cloud/metrics/)
- [Set up Grafana with Temporal Cloud observability to view metrics](/cloud/metrics/prometheus-grafana#grafana-data-sources-configuration)

On the self-hosted Temporal Service, expose Prometheus endpoints in your Temporal Service configuration and configure Prometheus to scrape metrics from the endpoints. You can then set up your observability platform (such as Grafana) to use Prometheus as a data source.

For details on self-hosted Temporal Service metrics and setup, see the following:

- [Temporal Service OSS metrics reference](/references/cluster-metrics)
- [Set up Prometheus and Grafana to view SDK and self-hosted Temporal Service metrics](/self-hosted-guide/monitoring)

---

## Temporal Service

:::info

Please note an important update in our terminology. We now refer to the Temporal Cluster as the Temporal Service.

:::

This guide provides a comprehensive technical overview of a Temporal Service.

A Temporal Service is the group of services, known as the [Temporal Server](/temporal-service/temporal-server), combined with [Persistence](/temporal-service/persistence) and [Visibility](/temporal-service/visibility) stores, that together act as a component of the Temporal Platform.

See the self-hosted Temporal Service [production deployment guide](/self-hosted-guide) for implementation guidance.

---

## Visibility

This page discusses [Visibility](#visibility).

## What is Visibility? {#visibility}

The term [Visibility](/visibility), within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view, filter, and search for Workflow Executions that currently exist within a Temporal Service.

The [Visibility store](/self-hosted-guide/visibility) in your Temporal Service stores persisted Workflow Execution Event History data and is set up as a part of your [Persistence store](/temporal-service/persistence) to enable listing and filtering details about Workflow Executions that exist on your Temporal Service.

- [How to set up a Visibility store](/self-hosted-guide/visibility)

With Temporal Server v1.21, you can set up [Dual Visibility](/dual-visibility) to migrate your Visibility store from one database to another. Support for separate standard and advanced Visibility setups is deprecated from Temporal Server v1.21 onwards. Check [Supported databases](/self-hosted-guide/visibility) for updates.

---

## What is Temporal?

Temporal is a scalable and reliable runtime for durable function executions called [Temporal Workflow Executions](/workflow-execution). Said another way, it's a platform that guarantees the [Durable Execution](#durable-execution) of your application code.

It enables you to develop as if failures don't even exist. Your application will run reliably even if it encounters problems, such as network outages or server crashes, that would be catastrophic for a typical application. The Temporal Platform handles these types of problems, allowing you to focus on the business logic instead of writing application code to detect and recover from failures.

## Durable Execution {#durable-execution}

Durable Execution in the context of Temporal refers to the ability of a Workflow Execution to maintain its state and progress even in the face of failures, crashes, or server outages. This is achieved through Temporal's use of an [Event History](/workflow-execution/event#event-history), which records the state of a Workflow Execution at each step.
If a failure occurs, the Workflow Execution can resume from the last recorded event, ensuring that progress isn't lost. This durability is a key feature of Temporal Workflow Executions, making them reliable and resilient. It enables application code to execute effectively once and to completion, regardless of whether it takes seconds or years.

## What is the Temporal Platform? {#temporal-platform}

The Temporal Platform consists of a supervising software service, typically called the [Temporal Service](/temporal-service), and application code bundled as [Worker Processes](/workers#worker-process). Together, these components create a runtime for your Workflow Executions.

A Temporal Service consists of the [Temporal Server](https://github.com/temporalio/temporal), written in Go, and a database. Our software-as-a-service (SaaS) offering, Temporal Cloud, offers an alternative to hosting the Temporal Service yourself.

Worker Processes are hosted and operated by you and execute your code. Workers run using one of our SDKs.

## What is a Temporal Application? {#temporal-application}

A Temporal Application is a set of [Temporal Workflow Executions](/workflow-execution). Each Temporal Workflow Execution has exclusive access to its local state, executes concurrently with all other Workflow Executions, and communicates with other Workflow Executions and the environment via message passing. A Temporal Application can consist of millions to billions of Workflow Executions.

**Workflow Executions are lightweight**

A Workflow Execution consumes few compute resources; in fact, if a Workflow Execution is suspended, such as when it is in a waiting state, it consumes no compute resources at all.

**Reentrant Process**

A Temporal Workflow Execution is a Reentrant Process. A Reentrant Process is resumable, recoverable, and reactive.

- Resumable: Ability of a process to continue execution after execution was suspended on an _awaitable_.
- Recoverable: Ability of a process to continue execution after execution was suspended on a _failure_.
- Reactive: Ability of a process to react to external events.

Therefore, a Temporal Workflow Execution executes a [Temporal Workflow Definition](/workflow-definition), also called a Temporal Workflow Function, your application code, exactly once and to completion, whether your code executes for seconds or years, in the presence of arbitrary load and arbitrary failures.

## What is a Failure? {#failure}

[Temporal Failures](/references/failures) are representations (in the SDKs and Event History) of various types of errors that occur in the system.

Failure handling is an essential part of development. For more information, including the difference between application-level and platform-level failures, see [Handling Failure From First Principles](https://dominik-tornow.medium.com/handling-failures-from-first-principles-1ed976b1b869). For the practical application of those concepts in Temporal, see [Failure Handling in Practice](https://temporal.io/blog/failure-handling-in-practice).
For languages that throw (or raise) errors (or exceptions), throwing an error that is not a Temporal Failure from a Workflow fails the Workflow Task (and the Task will be retried until it succeeds), whereas throwing a Temporal Failure (or letting a Temporal Failure propagate from Temporal calls, like an [Activity Failure](/references/failures#activity-failure) from an Activity call) fails the Workflow Execution. For more information, see [Application Failure](/references/failures#application-failure).

---

## Dual Visibility

This page discusses [Dual Visibility](#dual-visibility).

## What is Dual Visibility? {#dual-visibility}

Dual Visibility is a feature that lets you set a secondary Visibility store in addition to a primary store in your Temporal Service. Setting up Dual Visibility is optional and can be used to [migrate your Visibility database](/self-hosted-guide/visibility#migrating-visibility-database) or create a backup Visibility store.

For example, if you have Cassandra configured as your Visibility database, you can set up a supported SQL database as your secondary Visibility store and gradually migrate your data to the secondary store before deprecating your primary one.

A Dual Visibility setup requires two Visibility store configurations:

- **Primary Visibility:** The primary Visibility store, where Visibility data is written to and read from by default. The primary Visibility store is set with the `visibilityStore` configuration key in your Temporal Service.
- **Secondary Visibility:** A secondary store for your Visibility data. The secondary Visibility store is set with the `secondaryVisibilityStore` configuration key in your Temporal Service.

For configuration details, see [Dual Visibility setup](/self-hosted-guide/visibility#dual-visibility).

The following combinations are allowed in a Dual Visibility setting:

| Primary                     | Secondary                       |
| --------------------------- | ------------------------------- |
| Standard (Cassandra or SQL) | Advanced (SQL or Elasticsearch) |
| Advanced (SQL)              | Advanced (SQL)                  |
| Advanced (Elasticsearch)    | Advanced (Elasticsearch)        |

With Dual Visibility, you can read from only one Visibility store at a time, but you can configure your Temporal Service to write to the primary only, the secondary only, or to both primary and secondary Visibility stores.

When migrating from one Visibility store database to another, set up the database you want to migrate to as your secondary Visibility store. You can plan your migration using specific dynamic configuration keys that help you transition your read and write operations from the primary to the secondary Visibility store. For details on migrating your Visibility store databases, see [Dual Visibility](/self-hosted-guide/visibility#dual-visibility).

---

## List Filter

This page discusses [List Filter](#list-filter).

## What is a List Filter? {#list-filter}

The [Visibility](/temporal-service/visibility) List API requires you to provide a List Filter as an SQL-like string parameter. A List Filter includes [Search Attribute](/search-attribute) names, Search Attribute values, and [operators](#supported-operators) so that it can retrieve a filtered list of Workflow Executions from the Visibility store.

List Filter [Search Attribute](/search-attribute) names are case-sensitive. A single [Namespace](/namespaces) scopes each List Filter.

A List Filter using a time range provides a resolution of 1 ns on [Elasticsearch](/self-hosted-guide/visibility#elasticsearch) and 1 µs for [SQL databases](/self-hosted-guide/visibility).
### Supported operators

List Filters support the following operators:

- **`=, !=, >, >=, <, <=`**
- **`AND, OR, ()`**
- **`BETWEEN ... AND`**
- **`IN`**
- **`STARTS_WITH`**

:::note

The **ORDER BY** operator is currently not supported in Temporal Cloud.

The default ordering is: `ClosedTime DESC NULL FIRST`, `StartTime DESC`.

Custom Search Attributes of the `Text` type cannot be used in **ORDER BY** clauses.

:::

### Partial string match

There are different options for partial string matching when the type of the Search Attribute is [Text](#text) versus [Keyword](#keyword).

#### Text

Search Attributes of type `Text` are [broken up into words](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-standard-tokenizer.html) that match with the `=` operator. For example, if you have a custom `Text` Search Attribute named `Description` with either of the following values—

```
my-business-id-foobar
my business id foobar
```

—then the following List Filter matches—

```
Description = 'foobar'
```

—but a partial word does not:

```
// Doesn't match
Description = 'foo'
```

#### Keyword

For Search Attributes of type `Keyword` like `WorkflowId`, perform partial string matching using `STARTS_WITH` for prefixes or a `BETWEEN` range.

- `WorkflowId STARTS_WITH "order-"` matches Workflow Ids with the "order-" prefix, regardless of the following text.

  ```
  order-
  order-1234
  order-abracadabra
  order-~~~abracadabra
  ```

- `WorkflowId BETWEEN "order-" AND "order-~"` matches Workflow Ids that have characters after `order-` with ASCII values lower than `~` (126, the highest-value printable character), such as the following:

  ```
  order-
  order-1234
  order-abracadabra
  ```

  It does not match `order-~~`.

:::note Filter Composition Quick Reference

**Composition**

- Data types:
  - String literals with single or double quotes
  - Numbers (Integer and Floating Point)
  - Booleans
- Comparison: `=`, `!=`, `>`, `>=`, `<`, `<=`
- Expressions/Operators:
  - `IN array`
  - `BETWEEN value AND value`
  - `STARTS_WITH string`
  - `IS NULL`, `IS NOT NULL`
  - `expr AND expr`, `expr OR expr`, `( expr )`
- Array: `( comma-separated-values )`

**Please note**

- Wrap an attribute name in backticks if it contains characters not in `[a-zA-Z0-9]`.
- `STARTS_WITH` is only available for Keyword Search Attributes.

:::

### Efficient API usage

If the Advanced List Filter API retrieves a substantial number of Workflow Executions (more than 10,000), the response time might be longer. Beginning with Temporal Server v1.20, you can employ the `CountWorkflow` API to efficiently count the number of [Workflow Executions](/workflow-execution).

To paginate the results using the `ListWorkflow` API, use the page token to retrieve the next page. Continue until the page token becomes `null`/`nil`.

#### List Filter examples

Here are examples of List Filters set with the [Temporal CLI](/cli/workflow#list):

```
WorkflowType = "main.YourWorkflowDefinition" and ExecutionStatus != "Running" and (StartTime > "2021-06-07T16:46:34.236-08:00" or CloseTime > "2021-06-07T16:46:34-08:00")
```

When you use the preceding example, you receive a list of Workflows fulfilling the following criteria:

- Workflow Type is `main.YourWorkflowDefinition`.
- Workflow isn't in a running state.
- Workflow either started after "2021-06-07T16:46:34.236-08:00" or closed after "2021-06-07T16:46:34-08:00".

The following are additional examples of List Filters.
```sql
WorkflowId = '<workflow-id>'
```

```sql
WorkflowId = '<workflow-id>' or WorkflowId = '<another-workflow-id>'
```

```sql
WorkflowId IN ('<workflow-id>', '<another-workflow-id>')
```

```sql
WorkflowId = '<workflow-id>' and ExecutionStatus = 'Running'
```

```sql
WorkflowId = '<workflow-id>' or ExecutionStatus = 'Running'
```

```sql
WorkflowId = '<workflow-id>' and StartTime > '2021-08-22T15:04:05+00:00'
```

```sql
ExecutionTime between '2021-08-22T15:04:05+00:00' and '2021-08-28T15:04:05+00:00'
```

```sql
ExecutionTime < '2021-08-28T15:04:05+00:00' or ExecutionTime > '2021-08-22T15:04:05+00:00'
```

```sql
WorkflowType STARTS_WITH '<workflow-type-prefix>'
```

### Search Attribute aliasing

Temporal prefixes most [default Search Attributes](/search-attribute#default-search-attribute) with `Temporal` to avoid naming conflicts with custom Search Attributes. To make it easier to reference default Search Attributes in List Filters, Temporal supports aliasing, which lets you use the non-prefixed name of a default Search Attribute. However, if you choose to define a custom Search Attribute with the same name as the non-prefixed alias of a default Search Attribute, your custom Search Attribute overrides the alias.

:::info Server Version Requirement

Search Attribute aliasing requires Temporal Server version 1.30 or later.

:::

For example, the default Search Attribute `TemporalWorkflowVersioningBehavior` has the alias `WorkflowVersioningBehavior`. If you haven't defined a custom Search Attribute named `WorkflowVersioningBehavior`, you can use either name in a List Filter, and both refer to the same Search Attribute.

```sql
-- Using the non-prefixed alias
WorkflowVersioningBehavior = 'pinned'

-- Using the original Temporal-prefixed attribute name (equivalent)
TemporalWorkflowVersioningBehavior = 'pinned'
```

#### Alias resolution with custom Search Attributes

When resolving a Search Attribute in a List Filter, Temporal Server checks for matches in the following order:

1. Custom Search Attributes defined in the current Namespace
2. Default Search Attributes

This means that if you define a custom Search Attribute with the same name as the alias of a default Search Attribute, the non-`Temporal`-prefixed name refers to your custom attribute. You can still search with the default Search Attribute by using the `Temporal` prefix.

For example, if you have a custom Search Attribute named `SchedulePaused`, List Filters using the following Search Attributes will return different results:

```sql
-- If you have a custom Search Attribute named 'SchedulePaused',
-- this uses your custom attribute, not the default Search Attribute
SchedulePaused = true

-- The default Search Attribute still works by using the Temporal prefix
TemporalSchedulePaused = true
```

`SchedulePaused` refers to your custom Search Attribute, while `TemporalSchedulePaused` refers to the default Search Attribute.

---

## Search Attributes

This page discusses the following:

- [Search Attributes](#search-attribute)
- [Default Search Attributes](#default-search-attribute)
- [Custom Search Attributes](#custom-search-attribute)

## What is a Search Attribute? {#search-attribute}

A Search Attribute is an indexed field used in a [List Filter](/list-filter) to filter a list of [Workflow Executions](/workflow-execution) that have the Search Attribute in their metadata.

Each Search Attribute is a key-value pair metadata object included in a Workflow Execution's Visibility information. This information is available in the Visibility store.
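To inspect these key-value pairs for a single execution, you can describe it from a Client. A brief sketch with the Python SDK; the helper and the Workflow Id argument are illustrative, and `typed_search_attributes` assumes a recent SDK version:

```python
from temporalio.client import Client


async def show_search_attributes(client: Client, workflow_id: str) -> None:
    handle = client.get_workflow_handle(workflow_id)
    description = await handle.describe()
    # Each entry is a typed key-value pair from the execution's
    # Visibility information.
    for pair in description.typed_search_attributes:
        print(pair.key.name, pair.value)
```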
:::note

Search Attribute values are not encrypted because the Temporal Server must be able to read these values from the Visibility store when retrieving Workflow Execution details.

:::

Temporal provides some [default Search Attributes](/search-attribute#default-search-attribute), such as `ExecutionStatus`, the current state of your Workflow Executions. You can also create [custom Search Attribute](/search-attribute#custom-search-attribute) keys in your Visibility store and assign values when starting a Workflow Execution or in Workflow code. When using [Continue-As-New](/workflow-execution/continue-as-new) or a [Temporal Cron Job](/cron-job), Search Attribute keys are carried over to the new Workflow Run by default. Search Attribute values are available only for as long as the Workflow Execution is.

Search Attributes are most effective for search purposes or tasks requiring collection-based result sets. For business logic in which you need to get information about a Workflow Execution, consider one of the following:

- Storing state in a local variable and exposing it with a Query.
- Storing state in an external datastore through Activities and fetching it directly from the store.

If your business logic requires high throughput or low latency, store and fetch the data through Activities. You might experience lag due to the time that passes between the Workflow's state change and the corresponding Visibility store update.

### Default Search Attributes {#default-search-attribute}

A Temporal Service has a set of default Search Attributes already available. Default Search Attributes are set globally in any Namespace. These Search Attributes are created when the initial index is created.

| NAME | TYPE | DEFINITION |
| ---- | ---- | ---------- |
| BatcherUser | Keyword | Used by the internal batcher Workflow that runs in the `TemporalBatcher` Namespace division to indicate the user who started the batch operation. |
| BinaryChecksums | Keyword List | List of binary Ids of Workers that run the Workflow Execution. Deprecated since server version 1.21 in favor of the `BuildIds` search attribute. |
| BuildIds | Keyword List | List of Worker Build Ids that have processed the Workflow Execution, formatted as `versioned:{BuildId}` or `unversioned:{BuildId}`, or the sentinel `unversioned` value. Available from server version 1.21. |
| CloseTime | Datetime | The time at which the Workflow Execution completed. |
| ExecutionDuration | Int | The time needed to run the Workflow Execution (in nanoseconds). Available only for closed Workflows. |
| ExecutionStatus | Keyword | The current state of the Workflow Execution. |
| ExecutionTime | Datetime | The time at which the Workflow Execution actually begins running; same as `StartTime` for most cases but different for Cron Workflows and retried Workflows. |
| HistoryLength | Int | The number of Events in the Workflow Execution's Event History. Available only for closed Workflows. |
| HistorySizeBytes | Long | The size of the Event History. |
| RunId | Keyword | Identifies the current Workflow Execution Run. |
| StartTime | Datetime | The time at which the Workflow Execution started. |
| StateTransitionCount | Int | The number of times that the Workflow Execution has persisted its state. Available only for closed Workflows. |
| TaskQueue | Keyword | Task Queue used by the Workflow Execution. |
| TemporalChangeVersion | Keyword List | Stores change/version pairs if the GetVersion API is enabled. |
| TemporalReportedProblems | Keyword List | Stores information about Workflow Task failures, formatted as `category=<category> cause=<cause>`. |
| TemporalScheduledStartTime | Datetime | The time that the Workflow is scheduled to start according to the Schedule Spec. Can be manually triggered. Set on Schedules. |
| TemporalScheduledById | Keyword | The Id of the Schedule that started the Workflow. |
| TemporalSchedulePaused | Boolean | Indicates whether the Schedule has been paused. Set on Schedules. |
| TemporalWorkerDeployment | Keyword | Indicates the name of the associated Worker Deployment. |
| TemporalWorkerDeploymentVersion | Keyword | Indicates the Version string of the associated Worker Deployment, in the format `<deployment-name>:<build-id>`. |
| TemporalWorkflowVersioningBehavior | Keyword | Indicates the associated Worker Versioning behavior ("Pinned", "Auto-Upgrade", or null if not using Worker Versioning). |
| WorkflowId | Keyword | Identifies the Workflow Execution. |
| WorkflowType | Keyword | The type of Workflow. |

- All default Search Attributes are reserved and read-only. You cannot create a custom Search Attribute with the same name or alter an existing one.
- Search Attributes are not encrypted in the system. Do not use PII as either the Search Attribute name or the value.
- To use default Search Attributes with the `Temporal` prefix in a List Filter, you can use their non-prefixed alias. Refer to [Search Attribute aliasing](/list-filter#search-attribute-aliasing) for details.
- ExecutionStatus values correspond to Workflow Execution statuses: Running, Completed, Failed, Canceled, Terminated, ContinuedAsNew, TimedOut.
- StartTime, CloseTime, and ExecutionTime are stored as dates but are supported by queries that use either EpochTime in nanoseconds or a string in [RFC3339Nano format](https://pkg.go.dev/time#pkg-constants) (such as "2006-01-02T15:04:05.999999999Z07:00").
- ExecutionDuration is stored in nanoseconds but is supported by queries that use integers in nanoseconds, [Golang duration format](https://pkg.go.dev/time#ParseDuration), or "hh:mm:ss" format.
- CloseTime, HistoryLength, StateTransitionCount, and ExecutionDuration are present only in a closed Workflow Execution.
- ExecutionTime can differ from StartTime in retry and Cron use cases.

You can use the default Search Attributes in a List Filter, such as in the Temporal Web UI or with the `temporal workflow list` command, under the following conditions:

- Without advanced Visibility, you can only use the `=` operator with a single default Search Attribute in your List Filter. For example: `temporal workflow list --query "ExecutionStatus = 'Completed'"` or `temporal workflow list --query "WorkflowType = 'YourWorkflow'"`.
- With advanced Visibility, you can combine default Search Attributes in a List Filter to get a list of specific Workflow Executions. For example: `temporal workflow list --query "WorkflowType = 'main.YourWorkflowDefinition' and ExecutionStatus != 'Running' and (StartTime > '2022-06-07T16:46:34.236-08:00' or CloseTime < '2022-06-08T16:46:34-08:00')"`

### Custom Search Attributes {#custom-search-attribute}

You can [create custom Search Attributes](/self-hosted-guide/visibility#create-custom-search-attributes) with unique key names that are relevant to your business needs.
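Once a custom Search Attribute exists in your Visibility store, you can assign a value when starting a Workflow Execution and update it later from Workflow code. A hedged Python sketch; the attribute name, Workflow Type, Ids, and Task Queue are hypothetical:

```python
from temporalio.client import Client
from temporalio.common import (
    SearchAttributeKey,
    SearchAttributePair,
    TypedSearchAttributes,
)

# Hypothetical Keyword attribute, assumed already created in the store.
CUSTOMER_ID = SearchAttributeKey.for_keyword("CustomerId")


async def start_order(client: Client) -> None:
    await client.start_workflow(
        "OrderWorkflow",  # hypothetical Workflow Type
        id="order-1234",
        task_queue="orders",
        search_attributes=TypedSearchAttributes(
            [SearchAttributePair(CUSTOMER_ID, "cust-42")]
        ),
    )


# Inside Workflow code (temporalio.workflow), the value can later be
# changed or removed:
#   workflow.upsert_search_attributes([CUSTOMER_ID.value_set("cust-43")])
#   workflow.upsert_search_attributes([CUSTOMER_ID.value_unset()])
```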
Use custom Search Attributes in a List Filter, such as in the Temporal Web UI or with the `temporal workflow list` command, under the following conditions:

- Without advanced Visibility, you cannot use a custom Search Attribute in your List Filter.
- With advanced Visibility, you can create multiple custom Search Attributes and use them in combination in a List Filter to get a list of specific Workflow Executions. For example: `temporal workflow list --query "WorkflowType = 'main.YourWorkflowDefinition' and YourCustomSA = 'YourCustomSAValue' and (StartTime > '2022-06-07T16:46:34.236-08:00' or CloseTime < '2022-06-08T16:46:34-08:00')"`
- With Temporal Server v1.19 and earlier, you must [integrate Elasticsearch](/self-hosted-guide/visibility#elasticsearch) to use custom Search Attributes with List Filters.
- With Temporal Server v1.20 and later, custom Search Attribute capabilities are available on MySQL (v8.0.17 or later), PostgreSQL (v12 and later), and SQLite (v3.31.0 and later), in addition to Elasticsearch.

If you use Elasticsearch as your Visibility store, your custom Search Attributes apply globally and can be used across Namespaces. However, if you use any of the [supported SQL databases](/self-hosted-guide/visibility) with Temporal Server v1.20 and later, your custom Search Attributes are associated with a specific Namespace and can be used for Workflow Executions in that Namespace.

See [custom Search Attributes limits](/search-attribute#custom-search-attribute-limits) for limits on the number and size of custom Search Attributes you can create.

#### Supported types {#supported-types}

Custom Search Attributes must be one of the following types:

- Bool
- Datetime
- Double
- Int
- Keyword
- KeywordList
- Text

Note:

- **Double** is backed by the `scaled_float` Elasticsearch type with scale factor 10000 (4 decimal digits).
- **Datetime** is backed by the `date` type with millisecond precision in Elasticsearch 6 and the `date_nanos` type with nanosecond precision in Elasticsearch 7.
- **Int** is a 64-bit integer (the `long` Elasticsearch type).
- **Keyword** and **Text** types are concepts taken from Elasticsearch. Each word in a **Text** is considered a searchable keyword. For a UUID, that can be problematic because Elasticsearch indexes each portion of the UUID separately. To have the whole string considered as a searchable keyword, use the **Keyword** type. For example, if the key `ProductId` has the value `2dd29ab7-2dd8-4668-83e0-89cae261cfb1`:
  - As a **Keyword** it would be matched only by `ProductId = "2dd29ab7-2dd8-4668-83e0-89cae261cfb1"`.
  - As a **Text** it would be matched by `ProductId = "2dd8"`, which could cause unwanted matches.
- With Temporal Server v1.19 and earlier, the **Keyword** type can store a list of values.
- With Temporal Server v1.20 and later, the **Keyword** type supports only a single value. To store a list of values, use **KeywordList**.
- The **Text** type cannot be used in the "Order By" clause.

#### Custom Search Attributes limits {#custom-search-attribute-limits}

The following table lists the maximum number of custom Search Attributes you can create per Namespace by supported Visibility database.
| Search Attribute type | MySQL (v8.0.17 and later) | PostgreSQL (v12 and later) | SQLite (v3.31.0 and later) | Temporal Cloud |
| --------------------- | :-----------------------: | :------------------------: | :------------------------: | :------------: |
| Bool                  |             3             |             3              |             3              |       20       |
| Datetime              |             3             |             3              |             3              |       20       |
| Double                |             3             |             3              |             3              |       20       |
| Int                   |             3             |             3              |             3              |       20       |
| Keyword               |            10             |             10             |             10             |       40       |
| KeywordList           |             3             |             3              |             3              |       5        |
| Text                  |             3             |             3              |             3              |       5        |

Temporal does not impose a limit on the number of custom Search Attributes you can create with Elasticsearch. However, [Elasticsearch sets a default mapping limit](https://www.elastic.co/guide/en/elasticsearch/reference/8.6/mapping-settings-limit.html) that may apply. Custom Search Attributes are an advanced Visibility feature and are not supported on Cassandra.

Size limits for a custom Search Attribute:

- The default single Search Attribute **value** size limit is 2 KB.
- The maximum total Search Attribute size is 40 KB.
- The maximum total characters per Search Attribute value is 255.

For Temporal Cloud-specific configurations, see the [Defaults, limits, and configurable settings - Temporal Cloud](/cloud/limits#number-of-custom-search-attributes) guide.

### Usage {#usage}

Search Attributes available in your Visibility store can be used with Workflow Executions on your Temporal Service. To actually get results from a [List Filter](/list-filter), Search Attributes must be added to a Workflow Execution as metadata.

- To create custom Search Attributes in your Visibility store, see [Create custom Search Attributes](/self-hosted-guide/visibility#create-custom-search-attributes).
- To remove a custom Search Attribute from the Visibility store, see [Remove custom Search Attributes](/self-hosted-guide/visibility#remove-custom-search-attributes). Removing custom Search Attributes is not supported on Temporal Cloud.
- To rename a custom Search Attribute on Temporal Cloud, see [`tcld namespace search-attributes rename`](/cloud/tcld/namespace/#rename).
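On a self-hosted Temporal Service, creating and listing custom Search Attributes can also be done with the Temporal CLI. A brief sketch; the attribute name is a placeholder, and flags may vary slightly by CLI version:

```command
temporal operator search-attribute create --name CustomerId --type Keyword
temporal operator search-attribute list
```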
With Workflows you can do the following:

- Set the value of Search Attributes in your Workflow
- Update the value set for a Search Attribute from within the Workflow code
- Remove the value set for a Search Attribute from within the Workflow code

:::info Manage Search Attributes by SDK

- [How to manage Search Attributes using the Go SDK](/develop/go/observability#visibility)
- [How to manage Search Attributes using the Java SDK](/develop/java/observability#visibility)
- [How to manage Search Attributes using the PHP SDK](/develop/php/observability#visibility)
- [How to manage Search Attributes using the Python SDK](/develop/python/observability#visibility)
- [How to manage Search Attributes using the TypeScript SDK](/develop/typescript/observability#visibility)
- [How to manage Search Attributes using the .NET SDK](/develop/dotnet/observability#search-attributes)

:::

- To get a list of Search Attributes using the Temporal CLI, issue `temporal operator search-attribute list`. See [Search Attributes](/search-attribute).

After you add and set your Search Attributes, use your default or custom Search Attributes in a List Filter.

---

## Temporal Visibility

This page provides an overview of Temporal Visibility.

The term [Visibility](/visibility), within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view, filter, and search for Workflow Executions that currently exist within a Temporal Service.

The [Visibility store](/self-hosted-guide/visibility) in your Temporal Service stores persisted Workflow Execution Event History data and is set up as a part of your [Persistence store](/temporal-service/persistence) to enable listing and filtering details about Workflow Executions that exist on your Temporal Service.

- [How to set up a Visibility store](/self-hosted-guide/visibility)

With Temporal Server v1.21, you can set up [Dual Visibility](/dual-visibility) to migrate your Visibility store from one database to another. Support for separate standard and advanced Visibility setups will be deprecated from Temporal Server v1.21 onwards. Check [Supported databases](/self-hosted-guide/visibility) for updates.

## What is standard Visibility? {#standard-visibility}

Standard Visibility, within the Temporal Platform, is the subsystem and APIs that list Workflow Executions by a predefined set of filters.

Open Workflow Executions can be filtered by a time constraint and either a Workflow Type, Workflow Id, or Run Id.

Closed Workflow Executions can be filtered by a time constraint and either a Workflow Type, Workflow Id, Run Id, or Execution Status (Completed, Failed, Timed Out, Terminated, Canceled, or Continued-As-New).

[Custom Search Attributes](https://docs.temporal.io/search-attribute#custom-search-attribute) are not supported with standard Visibility.

Support for standard Visibility is deprecated beginning with Temporal Server v1.21. For updates, check [Supported databases](/self-hosted-guide/visibility).

## What is advanced Visibility?
{#advanced-visibility}

Advanced Visibility, within the Temporal Platform, is the subsystem and APIs that enable the listing, filtering, and sorting of [Workflow Executions](/workflow-execution) through a custom SQL-like [List Filter](/list-filter).

- In Temporal Service version 1.20 and later, advanced Visibility is available on SQL databases such as MySQL (version 8.0.17 and later) and PostgreSQL (version 12 and later), in addition to support for Elasticsearch.
- For Temporal Server versions 1.19.1 and earlier, you must [integrate with Elasticsearch](/self-hosted-guide/visibility#elasticsearch) to use advanced Visibility. Elasticsearch takes on the Visibility request load, relieving potential performance issues. We highly recommend operating a Temporal Service with Elasticsearch for any use case that spawns more than just a few Workflow Executions.
- On Temporal Cloud, advanced Visibility is enabled by default for [all users](/cloud/users#invite-users).

---

## Worker Versioning (Legacy)

:::tip Support, stability, and dependency info

- This document refers to the 2023 draft of Worker Versioning, which was deprecated.
- It was not made available in Temporal Cloud.
- The 2024 draft was available in Cloud on an opt-in basis, and is documented in this [Pre-release README.md](https://github.com/temporalio/temporal/blob/main/docs/worker-versioning.md).

For newer revisions of this feature set, please see [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) instead.

:::

Worker Versioning simplifies the process of deploying changes to [Workflow Definitions](/workflow-definition). It does this by letting you define sets of versions that are compatible with each other, and then assigning a Build ID to the code that defines a Worker. The Temporal Server uses the Build ID to determine which versions of a Workflow Definition a Worker can process.

We recommend that you read about Workflow Definitions before proceeding, because Worker Versioning is largely concerned with helping to manage nondeterministic changes to those definitions.

Worker Versioning helps manage nondeterministic changes by ensuring that [Workers](/workers) with different Workflow and Activity Definitions operating on the same Task Queue don't attempt to process [Workflow Tasks](/tasks#workflow-task) and [Activity Tasks](/tasks#activity-task-execution) that they can't successfully process, according to the sets of versions you've associated with that Task Queue. You accomplish this by assigning a Build ID (a free-form string) to the code that defines a Worker, and specifying which Build IDs are compatible with each other by updating the version sets associated with the Task Queue, which are stored by the Temporal Server.

### When and why you should use Worker Versioning

:::caution

This section is for a deprecated Worker Versioning API. Please redirect your attention to [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

The main reason to use this feature is to deploy incompatible changes to short-lived [Workflows](/workflows). On Task Queues using this feature, the Workflow starter doesn't have to know about the introduction of new versions. The new code in the newly deployed Workers executes new [Workflow Executions](/workflow-execution), while only Workers with an appropriate version process old Workflow Executions.

#### Decommission old Workers

You can decommission old Workers after you archive all open Workflows using their version.
If you have no need to query closed Workflows, you can decommission old Workers when no open Workflows remain at their version. For example, if you have a Workflow that completes within a day, a good strategy is to assign a new Build ID to every new Worker build and add it as the new overall default in the version sets. Because your Workflow completes in a day, you know that you won't need to keep older Workers running for more than a day after you deploy the new version (assuming availability). You can apply this technique to longer-lived Workflows too; however, you might need to run multiple Worker versions simultaneously while open Workflows complete.

Version sets have a maximum size limit, which defaults to 100 Build IDs across all sets. Operations to add new Build IDs to the sets fail if they would exceed this limit. There is also a limit on the number of version sets, which defaults to 10. A version can be garbage collected only after the Workflow Executions using it are deleted.

#### Deploy code changes to Workers

The feature also lets you deploy compatible changes to currently open Workflows, for example to prevent a buggy code path from executing. You achieve this by adding a new version to an existing set and defining it as _compatible_ with the existing version, which then shouldn't execute any future Workflow Tasks. Because the new version processes existing [Event Histories](/workflow-execution/event#event-history), it must adhere to the usual [deterministic constraints](/workflow-definition#deterministic-constraints), and you might need to use one of the [versioning APIs](/workflow-definition#workflow-versioning).

Moreover, this feature lets you make incompatible changes to Activity Definitions in conjunction with incompatible changes to Workflow Definitions that use those Activities. This functionality works because any Activity that a Workflow schedules on the same Task Queue gets dispatched by default only to Workers compatible with the Workflow that scheduled it. If you want to change an Activity Definition's type signature while creating a new incompatible Build ID for a Worker, you can do so without worrying about the Activity failing to execute on some other Worker with an incompatible definition. The same principle applies to Child Workflows. For both Activities and Child Workflows, you can override the default behavior and run the Activity or Child Workflow on the latest default version.

:::tip

Public-facing Workflows on a versioned Task Queue shouldn't change their signatures, because doing so contradicts the purpose of Workflow-launching Clients remaining unaware of changes in the Workflow Definition. If you need to change a Workflow's signature, use a different Workflow Type or a completely new Task Queue.

:::

:::note

If you schedule an Activity or a Child Workflow on _a different_ Task Queue from the one the Workflow runs on, the system doesn't assign a specific version. This means that if the target queue is versioned, they run on the latest default, and if it's unversioned, they operate as they would have without this feature.

:::

**Continue-As-New and Worker Versioning**

By default, a versioned Task Queue's Continue-as-New function starts the continued Workflow on the same compatible set as the original Workflow. If you continue-as-new onto a different Task Queue, the system doesn't assign any particular version. You also have the option to specify that the continued Workflow should start using the Task Queue's latest default version.
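The following section walks through defining these version sets. As a quick preview, with the legacy Temporal CLI the updates looked roughly like the sketch below; treat the exact subcommand and flag names as assumptions, since they changed across CLI releases:

```command
# Add a new overall-default version (starts a new incompatible set)
temporal task-queue update-build-ids add-new-default \
    --task-queue "your-task-queue" \
    --build-id "1.0"

# Add a version compatible with an existing one (e.g. a hotfix for 2.0)
temporal task-queue update-build-ids add-new-compatible \
    --task-queue "your-task-queue" \
    --build-id "2.1" \
    --existing-compatible-build-id "2.0"
```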
### How to use Worker Versioning

:::caution

This section is for a deprecated Worker Versioning API. See [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

To use Worker Versioning, follow these steps:

1. Define Worker build-identifier version sets for the Task Queue. You can use either the Temporal CLI or your choice of SDK.
2. Enable the feature on your Worker by specifying a Build ID.

#### Defining the version sets

Whether you use the [Temporal CLI](/cli/) or an SDK, updating the version sets feels the same. You specify the Task Queue that you're targeting, the Build ID that you're adding (or promoting), whether it becomes the new default version, and any existing versions it should be considered compatible with. The rest of this section uses updates to one Task Queue's version sets as examples.

By default, both Task Queues and Workers are in an unversioned state. [Unversioned Workers](#unversioned-workers) can poll unversioned Task Queues and receive tasks. To use this feature, both the Task Queue and the Worker must be associated with Build IDs. If you run a Worker using versioning against a Task Queue that has not been set up to use versioning (or is missing that Worker's Build ID), it won't get any tasks. Likewise, an unversioned Worker polling a Task Queue with versioning won't work either.

:::note

Versions don't need to follow semver or any other semantic versioning scheme! The versions in the following examples look like semver versions for clarity, but they don't need to be. Versions can be any arbitrary string.

:::

First, add a version `1.0` to the Task Queue as the new default. Your version sets now look like this:

| set 1 (default) |
| --------------- |
| 1.0 (default)   |

All new Workflows started on the Task Queue have their first tasks assigned to version `1.0`. Workers with their Build ID set to `1.0` receive these Tasks.

If Workflows that don't have an assigned version are still running on the Task Queue, Workers without a version take those tasks. So ensure that such Workers are still operational if any Workflows were open when you added the first version. If you deployed any Workers with a _different_ version, those Workers receive no Tasks.

Now, imagine you need to change the Workflow for some reason. Add `2.0` to the sets as the new default:

| set 1         | set 2 (default) |
| ------------- | --------------- |
| 1.0 (default) | 2.0 (default)   |

All new Workflows started on the Task Queue have their first tasks assigned to version `2.0`. Existing `1.0` Workflows keep generating tasks targeting `1.0`. Each deployment of Workers receives their respective Tasks. This same concept carries forward for each new incompatible version.

Maybe you have a bug in `2.0`, and you want to make sure all open `2.0` Workflows switch to some new code as fast as possible. So, you add `2.1` to the sets, marking it as compatible with `2.0`. Now your sets look like this:

| set 1         | set 2 (default) |
| ------------- | --------------- |
| 1.0 (default) | 2.0             |
|               | 2.1 (default)   |

All new Workflow Tasks that are generated for Workflows whose last Workflow Task completion was on version `2.0` are now assigned to version `2.1`. Because you specified that `2.1` is compatible with `2.0`, Temporal Server assumes that Workers with this version can process the existing Event Histories successfully.

Continue with your normal development cycle, adding a `3.0` version.
Nothing new here:

| set 1         | set 2         | set 3 (default) |
| ------------- | ------------- | --------------- |
| 1.0 (default) | 2.0           | 3.0 (default)   |
|               | 2.1 (default) |                 |

Now imagine that version `3.0` doesn't have an explicit bug, but something about the business logic is less than ideal. You are okay with existing `3.0` Workflows running to completion, but you want new Workflows to use the old `2.x` branch. This operation is supported by performing an update targeting `2.1` (or `2.0`) and setting its set as the current default, which results in these sets:

| set 1         | set 3         | set 2 (default) |
| ------------- | ------------- | --------------- |
| 1.0 (default) | 3.0 (default) | 2.0             |
|               |               | 2.1 (default)   |

Now new Workflows start on `2.1`.

#### Permitted and forbidden operations on version sets

A request to change the sets can do one of the following:

- Add a version to the sets as the new default version in a new overall-default compatible set.
- Add a version to an existing set that's compatible with an existing version.
  - Optionally making it the default for that set.
  - Optionally making that set the overall-default set.
- Promote a version within an existing set to become the default for that set.
- Promote a set to become the overall-default set.

You can't explicitly delete versions. This helps you avoid the situation in which Workflows accidentally become stuck with no means of making progress because the version they're associated with no longer exists.

However, sometimes you might want to do this intentionally. If you _want_ to make sure that all Workflows currently being processed by, say, `2.0` stop (even if you don't yet have a new version ready), you can add a new version `2.1` to the sets marked as compatible with `2.0`. New tasks will target `2.1`, but because you haven't deployed any `2.1` Workers, they won't make any progress.

#### Set constraints

The sets have a maximum size limit, which defaults to 100 Build IDs across all sets. This limit is configurable on Temporal Server via the `limit.versionBuildIdLimitPerQueue` dynamic config property. Operations to add new Build IDs to the sets fail if the limit would be exceeded.

There is also a limit on the number of sets, which defaults to 10. This limit is configurable via the `limit.versionCompatibleSetLimitPerQueue` dynamic config property.

In practice, these limits should rarely be a concern, because a version is no longer needed after no open Workflows are using it, and a background process deletes Build IDs and sets that are no longer needed.

There is also a limit on the size of each Build ID or version string, which defaults to 255 characters. This limit is configurable on the server via the `limit.workerBuildIdSize` dynamic config property.

### Build ID reachability

:::caution

This section is for a deprecated Worker Versioning API. See [Worker Versioning](/production-deployment/worker-deployments/worker-versioning).

:::

Eventually, you'll want to know whether you can retire the old Worker versions. Temporal provides functionality to help you determine whether a version is still in use by open or closed Workflows. You can use the Temporal CLI to do this with the following command:

```command
temporal task-queue get-build-id-reachability
```

The command determines, for each Task Queue, whether the Build ID in question is unreachable, only reachable by closed Workflows, or reachable by open and new Workflows.
For example, this `2.0` Build ID is shown here by the Temporal CLI to be reachable by both new Workflows and some existing Workflows:

```command
temporal task-queue get-build-id-reachability --build-id "2.0"
```

```output
BuildId  TaskQueue                                                  Reachability
2.0      build-id-versioning-dc0068f6-0426-428f-b0b2-703a7e409a97  [NewWorkflows ExistingWorkflows]
```

For more information, see the [CLI documentation](/cli/) or help output. You can also use the `GetWorkerTaskReachability` API directly from within language SDKs.

### Unversioned Workers

Unversioned Workers are Workers that have not opted into the Worker Versioning feature in their configuration. They receive tasks only from Task Queues that do not have any version sets defined on them, or that have open Workflows that began executing before versions were added to the queue.

To migrate from an unversioned Task Queue, add a new default Build ID to the Task Queue. From there, deploy Workers with the same Build ID. Unversioned Workers will continue processing open Workflows, while Workers with the new Build ID will process new Workflow Executions.

---

## Sticky Execution

This page discusses [Sticky Execution](#sticky-execution).

## What is a Sticky Execution? {#sticky-execution}

Workers cache the state of the Workflows they execute. To make this caching more effective, Temporal employs a performance optimization known as Sticky Execution, which directs Workflow Tasks to the same Worker that previously processed Tasks for a specific Workflow Execution.

### How Sticky Execution Works

Once a Workflow Execution begins, the Temporal Service schedules a Workflow Task and puts it into a Task Queue with the name you specify. Any Worker that polls that Task Queue is eligible to accept the Task and begin executing the Workflow.

The Worker that picks up this Workflow Task continues polling the original Task Queue, but also begins polling an additional Task Queue, which the Temporal Service shares exclusively with that specific Worker. This queue, which has an automatically generated name, is known as a **Sticky Queue**. The Worker caches the Workflow state in memory, which improves performance by reducing the need to reconstruct the Workflow from its Event History for every Task.

As the Workflow Execution progresses, the Temporal Service schedules additional Workflow Tasks into this Worker-specific Sticky Queue. If the Worker fails to start a Workflow Task in the Sticky Queue shortly after it's scheduled (within five seconds by default), the Temporal Service disables stickiness for that Workflow Execution. When stickiness is disabled, the Temporal Service reschedules the Workflow Task in the original queue, allowing any Worker to pick it up and continue the Workflow Execution.

If a Workflow Task fails, the Worker removes that Workflow Execution from its cache (as it's now in an unknown state), which invalidates the Sticky Execution. The Workflow Task is then put back into the original Task Queue.

### Why Sticky Execution?

The main benefit of Sticky Execution is improved performance. By caching the Workflow state in memory and directing Tasks to the same Worker, it reduces the need to reconstruct the Workflow from its Event History for every Task, which is particularly useful for latency-sensitive Workflows.

Sticky Execution is the default behavior of the Temporal Platform and only applies to Workflow Tasks. Since Event History is associated with a Workflow, the concept of Sticky Execution is not relevant to Activity Tasks.
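In the Python SDK, the Workflow cache and the Sticky Queue timeout are exposed as Worker options. A hedged sketch; option names are from `temporalio.worker.Worker`, and the Workflow shown is a placeholder:

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.client import Client
from temporalio.worker import Worker


@workflow.defn
class MyWorkflow:  # placeholder Workflow so the Worker has something to run
    @workflow.run
    async def run(self) -> None:
        pass


async def run_worker(client: Client) -> None:
    worker = Worker(
        client,
        task_queue="my-task-queue-name",
        workflows=[MyWorkflow],
        # Size of the in-memory Workflow cache that backs Sticky Execution.
        max_cached_workflows=1000,
        # How long a Workflow Task may wait in the Sticky Queue before the
        # Service disables stickiness and reschedules it on the original queue.
        sticky_queue_schedule_to_start_timeout=timedelta(seconds=5),
    )
    await worker.run()
```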
- [How to set a `StickyScheduleToStartTimeout` on an individual Worker in Go](/develop/go/core-application#stickyscheduletostarttimeout)

---

## Task Queues and Naming Best Practices

---

# Task Queue Names

The Temporal Service maintains a set of Task Queues, which Workers poll to see what work needs to be done. Each Task Queue is identified by a name, which is provided to the Temporal Service when launching a Workflow Execution.

**Excerpt of code used to start the Workflow in Python**

```python
client = await Client.connect("localhost:7233", namespace="default")

# Execute a workflow
result = await client.execute_workflow(
    GreetingWorkflow.run,
    name,
    id="my-workflow",
    task_queue="my-task-queue-name",
)
```

**Excerpt of code used to configure the Worker in Python**

```python
worker = Worker(
    client,
    task_queue="my-task-queue-name",
    workflows=[GreetingWorkflow],
    activities=[activities.say_hello],
)
```

**Excerpt of code used to start the Workflow in Go**

```go
options := client.StartWorkflowOptions{
    ID:        "my-workflow",
    TaskQueue: "my-task-queue-name",
}
run, err := c.ExecuteWorkflow(ctx, options, ProcessOrderWorkflow, input)
```

**Excerpt of code used to configure the Worker in Go**

```go
w := worker.New(c, "my-task-queue-name", worker.Options{})
```

**Excerpt of code used to start the Workflow in Java**

```java
WorkflowOptions options = WorkflowOptions.newBuilder()
    .setWorkflowId("my-workflow")
    .setTaskQueue("my-task-queue-name")
    .build();
MyWorkflow workflow = client.newWorkflowStub(MyWorkflow.class, options);
```

**Excerpt of code used to configure the Worker in Java**

```java
Worker worker = factory.newWorker("my-task-queue-name");
```

**Excerpt of code used to start the Workflow in TypeScript**

```typescript
await client.workflow.start(OrderProcessingWorkflow, {
  args: [order],
  taskQueue: 'my-task-queue',
  workflowId: `workflow-order-${order.id}`,
});
```

**Excerpt of code used to configure the Worker in TypeScript**

```typescript
const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  connection,
  workflowsPath: require.resolve('./workflows'),
  activities,
});
```

**Excerpt of code used to start the Workflow in C# and .NET**

```csharp
var options = new WorkflowOptions(
    id: "translation-workflow",
    taskQueue: "my-task-queue");

// Run workflow
var result = await client.ExecuteWorkflowAsync(
    (TranslationWorkflow wf) => wf.RunAsync(input),
    options);
```

**Excerpt of code used to configure the Worker in C# and .NET**

```csharp
using var worker = new TemporalWorker(
    client,
    new TemporalWorkerOptions("my-task-queue")
        .AddAllActivities(activities)
        .AddWorkflow<TranslationWorkflow>());
```

Since Task Queues are created dynamically when they are first used, a mismatch between these two values does not result in an error. Instead, it results in the creation of two different Task Queues. Consequently, the Worker will not receive any Tasks from the Temporal Service and the Workflow Execution will not progress. Therefore, we recommend that you define the Task Queue name in a constant that is referenced by the Client and the Worker if possible, as this ensures that they always use the same value.

**Excerpt of code used to define a constant with the Task Queue name in Python (in a shared.py file)**

```python
TASK_QUEUE_NAME = "my-task-queue-name"
```

**Excerpt of code used to start the Workflow, referencing the constant defined with the Task Queue name in Python**

```python
from shared import TASK_QUEUE_NAME

...

client = await Client.connect("localhost:7233", namespace="default")

# Execute a workflow
result = await client.execute_workflow(
    GreetingWorkflow.run,
    name,
    id="my-workflow",
    task_queue=TASK_QUEUE_NAME,
)
```

**Excerpt of code used to configure the Worker, referencing the constant defined with the Task Queue name in Python**

```python
worker = Worker(
    client,
    task_queue=TASK_QUEUE_NAME,
    workflows=[GreetingWorkflow],
    activities=[activities.say_hello],
)
```

**Excerpt of code used to define a constant with the Task Queue name in Go**

```go
package app

const TaskQueueName = "my-taskqueue-name"
```

**Excerpt of code used to start the Workflow, referencing the constant defined with the Task Queue name in Go**

```go
options := client.StartWorkflowOptions{
    ID:        "my-workflow",
    TaskQueue: app.TaskQueueName,
}
run, err := c.ExecuteWorkflow(ctx, options, ProcessOrderWorkflow, input)
```

**Excerpt of code used to configure the Worker, referencing the constant defined with the Task Queue name in Go**

```go
w := worker.New(c, app.TaskQueueName, worker.Options{})
```

**Excerpt of code used to define a constant with the Task Queue name in Java**

```java
package app;

public class Constants {
    public static final String taskQueueName = "my-task-queue-name";
}
```

**Excerpt of code used to start the Workflow, referencing the constant defined with the Task Queue name in Java**

```java
WorkflowOptions options = WorkflowOptions.newBuilder()
    .setWorkflowId("my-workflow")
    .setTaskQueue(Constants.taskQueueName)
    .build();
MyWorkflow workflow = client.newWorkflowStub(MyWorkflow.class, options);
```

**Excerpt of code used to configure the Worker, referencing the constant defined with the Task Queue name in Java**

```java
Worker worker = factory.newWorker(Constants.taskQueueName);
```

**Excerpt of code used to define a constant with the Task Queue name in TypeScript**

```typescript
const TASK_QUEUE_NAME = 'my-taskqueue-name';
```

**Excerpt of code used to start the Workflow, referencing the constant defined with the Task Queue name in TypeScript**

```typescript
// additional code would follow
await client.workflow.start(OrderProcessingWorkflow, {
  args: [order],
  taskQueue: TASK_QUEUE_NAME,
  workflowId: `workflow-order-${order.id}`,
});
```

**Excerpt of code used to configure the Worker, referencing the constant defined with the Task Queue name in TypeScript**

```typescript
// additional code would follow
const worker = await Worker.create({
  taskQueue: TASK_QUEUE_NAME,
  connection,
  workflowsPath: require.resolve('./workflows'),
  activities,
});
```

**Excerpt of code used to define a constant with the Task Queue name in C# and .NET**

```csharp
public static class WorkflowConstants
{
    public const string TaskQueueName = "translation-tasks";
}
```

**Excerpt of code used to start the Workflow, referencing the constant defined with the Task Queue name in C# and .NET**

```csharp
var options = new WorkflowOptions(
    id: "translation-workflow",
    taskQueue: WorkflowConstants.TaskQueueName);

// Run workflow
var result = await client.ExecuteWorkflowAsync(
    (TranslationWorkflow wf) => wf.RunAsync(input),
    options);
```

**Excerpt of code used to configure the Worker, referencing the constant defined with the Task Queue name in C# and .NET**

```csharp
using var worker = new TemporalWorker(
    client,
    new TemporalWorkerOptions(WorkflowConstants.TaskQueueName)
        .AddAllActivities(activities)
        .AddWorkflow<TranslationWorkflow>());
```

However, it's not always possible to define the Task Queue name in a constant, such as when the Client used to start the Workflow is
running on another system or is implemented in a different programming language.

---

## Task Queues

This page discusses [Task Queues](#task-queue), including [where to set Task Queues](#set-task-queue) and [Task Ordering](#task-ordering).

## What is a Task Queue? {#task-queue}

A Task Queue is a lightweight, dynamically allocated queue that one or more [Worker Entities](/workers#worker-entity) poll for [Tasks](/tasks).

There are three types of Task Queues: Activity Task Queues, Workflow Task Queues, and Nexus Task Queues. A Nexus Endpoint creates an entry point that separates callers from the underlying Nexus Task Queue. Nexus callers interact only with the Nexus Endpoint. This endpoint routes Nexus requests to a target Task Queue that's polled by a Nexus Worker.

Task Queues are lightweight components that don't require explicit registration. They're created on demand when a Workflow Execution, Activity, or Nexus Operation is invoked, and/or when a Worker Process subscribes to start polling. When a named Task Queue is created, individual Task Queues for Workflows, Activities, and Nexus are created using the same name. A Temporal Application can use, and the Temporal Service can maintain, an unlimited number of Task Queues.

Workers poll for Tasks in Task Queues via synchronous RPC. This implementation offers several benefits:

- A Worker Process polls for a message only when it has spare capacity, avoiding overloading itself.
- In effect, Task Queues enable load balancing across many Worker Processes.
- Task Queues enable [Task Routing](/task-routing), which is the routing of specific Tasks to specific Worker Processes or even a specific process.
- Activity Task Queues support server-side throttling, which enables you to limit the Task dispatching rate to the pool of Worker Processes while still supporting Task dispatching at higher rates when spikes happen.
- Workflow and Activity Tasks persist in a Task Queue. When a Worker Process goes down, the messages remain until the Worker recovers and can process the Tasks.
- Nexus and Query Tasks are not persisted. Instead, they are sync matched when, and only when, polled by a Worker. Sync matching immediately matches and delivers a Task to an available Worker without persisting the Task to the Service database. The caller is responsible for retrying failed operations. Caller Workflows that invoke Nexus Operations automatically retry Nexus Tasks until the Schedule-to-Close timeout is exceeded.
- Worker Processes do not need to advertise themselves through DNS or any other network discovery mechanism.
- Worker Processes connect directly to the Temporal Service for secure communication without needing to open exposed ports.

Any Worker can pick up any Task on a given Task Queue. You must ensure that any Worker that accepts a Task can process it using one of the Worker's registered Workflows, Activities, or Nexus Operation handlers. This means that all Workers listening to a Task Queue must register all Workflows, Activities, and Nexus Operations that live on that Queue.

There are two exceptions to this "Task Queue Workers with identical registrations" rule. First, Worker Versioning may be used. During Worker upgrade binary rollouts, it's okay to have temporarily misaligned registrations. Second, dynamic Workflow or Activity components may be used. If a Task arrives whose type isn't explicitly registered, the Worker can handle it with a pre-registered dynamic stand-in.
When Workers don't have a registered Workflow, Activity, Nexus Operation, or dynamic Workflow or Activity component for a given Task, the Task fails with a "Not Found" error.

- "Not Found" Workflow Tasks and Activity Tasks are treated as _retryable_ errors.
- "Not Found" Nexus Tasks are _non-retryable_ and must be manually retried from the caller Workflow.

#### Where to set Task Queues {#set-task-queue}

There are five places where the developer can set the name of the Task Queue.

1. A Task Queue must be set when spawning a Workflow Execution:

- [How to start a Workflow Execution using the Temporal CLI](/cli/workflow#start)
- [How to start a Workflow Execution using the Go SDK](/develop/go/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the Java SDK](/develop/java/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the PHP SDK](/develop/php/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the Python SDK](/develop/python/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the TypeScript SDK](/develop/typescript/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the .NET SDK](/develop/dotnet/temporal-client#start-workflow)

2. A Task Queue name must be set when creating a Worker Entity and when running a Worker Process:

- [How to run a development Worker using the Go SDK](/develop/go/core-application#develop-worker)
- [How to run a development Worker using the Java SDK](/develop/java/core-application#run-a-dev-worker)
- [How to run a development Worker using the PHP SDK](/develop/php/core-application#run-a-dev-worker)
- [How to run a development Worker using the Python SDK](/develop/python/core-application#run-a-dev-worker)
- [How to run a development Worker using the TypeScript SDK](/develop/typescript/core-application#run-a-dev-worker)
- [How to run a development Worker using the .NET SDK](/develop/dotnet/core-application#run-worker-process)
- [How to run a Temporal Cloud Worker using the Go SDK](/develop/go/core-application#run-a-temporal-cloud-worker)
- [How to run a Temporal Cloud Worker using the TypeScript SDK](/develop/typescript/core-application#run-a-temporal-cloud-worker)

Note that all Worker Entities listening to the same Task Queue name must be registered to handle the exact same Workflow Types, Activity Types, and Nexus Operations. If a Worker Entity polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

3. A Task Queue name can be provided when spawning an Activity Execution: This is optional. An Activity Execution inherits the Task Queue name from its Workflow Execution if one is not provided.

- [How to start an Activity Execution using the Go SDK](/develop/go/core-application#activity-execution)
- [How to start an Activity Execution using the Java SDK](/develop/java/core-application#activity-execution)
- [How to start an Activity Execution using the PHP SDK](/develop/php/core-application#activity-execution)
- [How to start an Activity Execution using the Python SDK](/develop/python/core-application#activity-execution)
- [How to start an Activity Execution using the TypeScript SDK](/develop/typescript/core-application#activity-execution)
- [How to start an Activity Execution using the .NET SDK](/develop/dotnet/core-application#activity-execution)

4.
A Task Queue name can be provided when spawning a Child Workflow Execution: This is optional. A Child Workflow Execution inherits the Task Queue name from its Parent Workflow Execution if one is not provided.

- [How to start a Child Workflow Execution using the Go SDK](/develop/go/child-workflows)
- [How to start a Child Workflow Execution using the Java SDK](/develop/java/child-workflows)
- [How to start a Child Workflow Execution using the PHP SDK](/develop/php/continue-as-new)
- [How to start a Child Workflow Execution using the Python SDK](/develop/python/child-workflows)
- [How to start a Child Workflow Execution using the TypeScript SDK](/develop/typescript/child-workflows)
- [How to start a Child Workflow Execution using the .NET SDK](/develop/dotnet/child-workflows)

5. A Task Queue name can be provided when creating a Nexus Endpoint. Nexus Endpoints route requests to the target Task Queue. Nexus Workers poll the target Task Queue to handle Nexus Tasks, such as starting or canceling a Nexus Operation.

- [How to run a Nexus Worker using the Go SDK](https://docs.temporal.io/develop/go/nexus#register-a-nexus-service-in-a-worker)
- [How to run a Nexus Worker using the Java SDK](https://docs.temporal.io/develop/java/nexus#register-a-nexus-service-in-a-worker)

#### Task ordering

Task Queues can be scaled by adding partitions. By [default](/references/dynamic-configuration#service-level-rps-limits), each Task Queue has 4 partitions.

Task Queues with a single partition are almost always first-in, first-out, with rare edge case exceptions. However, using a single partition limits you to low- and medium-throughput use cases.

In Task Queues with multiple partitions, each Task is assigned to a random partition. Generally, partitions act as FIFO queues, so once a Task Queue builds up a backlog, the sync match rate (Tasks that can be dispatched immediately) drops to nearly zero, because the Task Queue instead dispatches Tasks from the backlog (that is, async matches) first.

:::note

This section is about the ordering of individual Tasks, and does not apply to the ordering of Workflow Executions, Activity Executions, or [Events](/workflow-execution/event#event) in a single Workflow Execution. The order of Events in a Workflow Execution is guaranteed to remain constant once they have been written to that Workflow Execution's [History](/workflow-execution/event#event-history).

:::

---

## Task Routing and Worker Sessions

This page discusses the following:

- [Task Routing](#task-routing)
- [Worker Sessions](#worker-session)

## What is Task Routing? {#task-routing}

Task Routing is when a Task Queue is paired with one or more Workers, primarily for Activity Task Executions. This could also mean employing multiple Task Queues, each one paired with a Worker Process. Task Routing has many applicable use cases.

Some SDKs provide a [Session API](#worker-session) that offers a straightforward way to ensure that Activity Tasks are executed with the same Worker without requiring you to manually specify Task Queue names. It also includes features like concurrent session limitations and Worker failure detection.

### Flow control

A Worker that consumes from a Task Queue asks for an Activity Task only when it has available capacity, so it is never overloaded by request spikes. If Activity Tasks get created faster than Workers can process them, they are backlogged in the Task Queue.

### Throttling

The rate at which each Activity Worker polls for and processes Activity Tasks is configurable per Worker.
A Worker does not exceed its configured rate even if it has spare capacity.

There is also support for global Task Queue rate limiting. This limit works across all Workers for the given Task Queue. It is frequently used to limit load on a downstream service that an Activity calls into.

### Specific environments

In some cases, you might need to execute Activities in a dedicated environment. To send Activity Tasks to this environment, use a dedicated Task Queue.

#### Route Activity Tasks to a specific host

In some use cases, such as file processing or machine learning model training, an Activity Task must be routed to a specific Worker Process or Worker Entity.

For example, suppose that you have a Workflow with the following three separate Activities:

- Download a file.
- Process the file in some way.
- Upload a file to another location.

The first Activity, to download the file, could occur on any Worker on any host. However, the second and third Activities must be executed by a Worker on the same host where the first Activity downloaded the file.

In a real-life scenario, you might have many Worker Processes scaled over many hosts. You would need to develop your Temporal Application to route Tasks to specific Worker Processes when needed.

Code samples:

- [Go file processing example](https://github.com/temporalio/samples-go/tree/main/fileprocessing)
- [Java file processing example](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/fileprocessing)
- [PHP file processing example](https://github.com/temporalio/samples-php/tree/master/app/src/FileProcessing)

#### Route Activity Tasks to a specific process

Some Activities load large datasets and cache them in the process. The Activities that rely on those datasets should be routed to the same process.

In this case, a unique Task Queue would exist for each Worker Process involved.

#### Workers with different capabilities

Some Workers might exist on GPU boxes versus non-GPU boxes. In this case, each type of box would have its own Task Queue, and a Workflow can pick the appropriate one when scheduling Activity Tasks.

### Multiple priorities

If your use case involves more than one priority, you can create one Task Queue per priority, with a Worker pool per priority.

### Versioning

Task Routing is the simplest way to version your code.

If you have a new backward-incompatible Activity Definition, start by using a different Task Queue.

## What is a Worker Session? {#worker-session}

A Worker Session is a feature provided by some SDKs that provides a straightforward API for [Task Routing](#task-routing) to ensure that Activity Tasks are executed with the same Worker without requiring you to manually specify Task Queue names. It also includes features like concurrent session limitations and Worker failure detection.

- [How to use Worker Sessions](/develop/go/sessions)

---

## Tasks

This page discusses the following:

- [Task](#task)
- [Workflow Task](#workflow-task)
- [Workflow Task Execution](#workflow-task-execution)
- [Activity Task](#activity-task)
- [Activity Task Execution](#activity-task-execution)
- [Nexus Task](#nexus-task)
- [Nexus Task Execution](#nexus-task-execution)

## What is a Task? {#task}

A Task is the context that a Worker needs to progress with a specific [Workflow Execution](/workflow-execution), [Activity Execution](/activity-execution), or [Nexus Task Execution](#nexus-task-execution).
There are three types of Tasks:

- [Workflow Task](#workflow-task)
- [Activity Task](#activity-task)
- [Nexus Task](#nexus-task)

## What is a Workflow Task? {#workflow-task}

A Workflow Task is a Task that contains the context needed to make progress with a Workflow Execution.

- Every time a new external event that might affect a Workflow state is recorded, a Workflow Task that contains the event is added to a Task Queue and then picked up by a Workflow Worker.
- After the new event is handled, the Workflow Task is completed with a list of [Commands](/workflow-execution#command).
- Handling of a Workflow Task is usually very fast and is not related to the duration of operations that the Workflow invokes.

### What is a Workflow Task Execution? {#workflow-task-execution}

A Workflow Task Execution occurs when a [Worker](/workers#worker-entity) picks up a [Workflow Task](#workflow-task) and uses it to make progress on the execution of a [Workflow Definition](/workflow-definition) (also known as a Workflow function).

## What is an Activity Task? {#activity-task}

An Activity Task contains the context needed to proceed with an [Activity Task Execution](#activity-task-execution). Activity Tasks largely represent the Activity Task Scheduled Event, which contains the data needed to execute an Activity Function.

If Heartbeat data is being passed, an Activity Task will also contain the latest Heartbeat details.

### What is an Activity Task Execution? {#activity-task-execution}

An Activity Task Execution occurs when a [Worker](/workers#worker-entity) uses the context provided from the [Activity Task](#activity-task) and executes the [Activity Definition](/activity-definition) (also known as the Activity Function).

The [ActivityTaskScheduled Event](/references/events#activitytaskscheduled) corresponds to when the Temporal Service puts the Activity Task into the Task Queue.

The [ActivityTaskStarted Event](/references/events#activitytaskstarted) corresponds to when the Worker picks up the Activity Task from the Task Queue.

Either [ActivityTaskCompleted](/references/events#activitytaskcompleted) or one of the other Closed Activity Task Events corresponds to when the Worker has yielded back to the Temporal Service.

The API to schedule an Activity Execution provides an "effectively once" experience, even though there may be several Activity Task Executions that take place to successfully complete an Activity.

Once an Activity Task finishes execution, the Worker responds to the Temporal Service, and a corresponding Event is written to the History:

- ActivityTaskCanceled
- ActivityTaskCompleted
- ActivityTaskFailed
- ActivityTaskTimedOut

## What is a Nexus Task? {#nexus-task}

A Nexus Task represents a single Nexus request to start or cancel a Nexus Operation. The Nexus Task includes details such as the Nexus Service and Nexus Operation names, and other information required to process the Nexus request. The Temporal Worker triggers the registered Operation handler based on the Nexus Task information.

### What is a Nexus Task Execution? {#nexus-task-execution}

A Nexus Task Execution occurs when a Worker uses the context provided from the Nexus Task and executes an action associated with a Nexus Operation. Most commonly this means starting a Nexus Operation using its registered Nexus Operation handler, but it also covers the additional actions that may be performed on a Nexus Operation, such as cancellation.

The NexusOperationScheduled Event corresponds to when the Temporal Service records the Workflow's intent to schedule an operation.
The NexusOperationStarted Event corresponds to when the Worker picks up the Nexus Task from the Task Queue, starts an asynchronous Nexus Operation, and returns an Operation token to the caller indicating that the asynchronous Nexus Operation has started.

Either NexusOperationCompleted or one of the other Closed Nexus Operation Events corresponds to when the Nexus Operation has reached a final state, either by completing successfully or by completing unsuccessfully in the case of a failure, timeout, or cancellation.

A Nexus Operation Execution appears to the caller Workflow as a single RPC, while under the hood the Temporal Service may issue several Nexus Tasks to attempt to start the Operation. Hence, a Nexus Operation Handler implementation should be idempotent. The WorkflowRunOperation provided by the SDK leverages Workflow ID based deduplication to ensure idempotency and provide an "effectively once" experience.

A Nexus Task Execution completes when a Worker responds to the Temporal Service with either a RespondNexusTaskCompleted or RespondNexusTaskFailed call, or when the Task times out. The Temporal Service interprets the outcome and determines whether to retry the Task or record the progress in a History Event:

- NexusTaskCompleted
- NexusTaskFailed

---

## Worker Shutdown Behavior

When a Worker shuts down, it stops polling for new tasks and begins the shutdown sequence. In-flight Workflow Tasks may fail if they aren't completed in time, after exhausting Retry Policy attempts.

There are two types of shutdown behavior, depending on whether a graceful shutdown period is configured.

## Graceful Shutdown

Graceful shutdown configures how much time a Worker has to complete its current task before shutting down. An Activity is able to determine that the Worker it's running on is being shut down, through the Activity context.

> Core SDKs - `graceful_shutdown_period`
> Go - `WorkerStopTimeout`
> Java - `shutdown()` followed by `awaitTermination(timeout, unit)`

### Workflow tasks

Any in-flight Workflow Tasks are (attempted to be) completed. The only reasons they may not complete immediately are that Workflow code is (incorrectly) blocking, or because of Local Activities (see below).

### Activities

Activities are allowed to complete during the graceful shutdown period.

### Local Activities

Because Local Activities run within a Workflow Task, current and future Local Activities within the same Workflow Task will be allowed to run and complete, assuming there is no additional command to yield to.

If the Local Activity is unable to complete within the graceful shutdown period, the Local Activity attempt is sent a cancel signal. In this case, no new Local Activities will be retried or started, and the Worker is shut down. The Worker still waits for the current Workflow Task to complete, meaning you can eventually hit your Workflow Task or execution timeout, unless another Worker is spun up.

## Non-Graceful Shutdown

This behavior applies when no graceful period is specified, or when the shutdown takes longer than the configured graceful period.

In all cases, the Activity context is canceled and the Worker will finish shutdown when the current Workflow Task completes (with either success or failure).

:::note
Go and Core SDKs behave differently when the shutdown timeout passes while an Activity or Local Activity is still running:

**Go** - The shutdown completes, but the Activity will continue to run and use a slot.
**Core** - The Worker shutdown does not complete until the Activity completes.
:::

### Local Activities

The Local Activity is sent a cancel signal, then the Workflow Task heartbeats stop, and no new Local Activities will be retried or started. The Worker still waits for the current Workflow Task to complete, meaning you can eventually hit your Workflow Task or execution timeout, unless another Worker is spun up.

## General Developer Guidance

- Ensure Activities and Local Activities **honor context cancellation** or other shutdown signals.
- Expect that **long or hung Local Activities may block shutdown** unless you fail early. Local Activities should generally be reserved for short Activities in any case.

---

## Worker Versioning

This page defines some of the underlying concepts used in [Worker Versioning](/production-deployment/worker-deployments/worker-versioning):

- [Worker Deployments](#deployments)
- [Worker Deployment Versions](#deployment-versions)
- [Versioning Behaviors](#versioning-behaviors)
- [Versioning Definitions](#versioning-definitions)
- [Versioning Statuses](#versioning-statuses)
- [Continue-as-new, Child Workflow, and Retry Semantics](#inheritance-semantics)

## Worker Deployments {#deployments}

A Worker Deployment is a logical service that groups similar Workers together for unified management. Each Deployment has a name (such as your service name) and supports versioning through a series of Worker Deployment Versions.

## Worker Deployment Versions {#deployment-versions}

A Worker Deployment Version represents an iteration of a Worker Deployment. Each Deployment Version consists of Workers that share the same code build and environment. When a Worker starts polling for Workflow and Activity Tasks, it reports its Deployment Version to the Temporal Server.

## Versioning Behaviors {#versioning-behaviors}

You can declare each Workflow type to have a **Versioning Behavior**, either Pinned or Auto-Upgrade, in your Workflow configuration using an SDK or the CLI. To learn more about implementing Worker Versioning, see our [Worker Versioning in production](/production-deployment/worker-deployments/worker-versioning) page.

### Pinned Workflows {#pinned}

A **Pinned** Workflow is guaranteed to complete on a single Worker Deployment Version. You can mark a Workflow Type as Pinned when you register it by adding an additional Pinned parameter. If you need to move a Pinned Workflow to a new version, use [`temporal workflow update-options`](/cli/workflow#update-options).

### Auto-Upgrade Workflows {#auto-upgrade}

An **Auto-Upgrade** Workflow will move to the latest Worker Deployment Version automatically whenever you change the current version. Auto-Upgrade Workflows are not restricted to a single Deployment Version and need to be kept replay-safe manually, for example with [patching](/workflow-definition#workflow-versioning).

### Activity behavior across versions

There are a few scenarios to consider for your Activities when you're handling your Worker Deployment versions.

- Activities generally start on the Worker Deployment Version of their Workflow, which means:
  - For Pinned Workflows, an Activity starts on the pinned version.
  - For Auto-Upgrade Workflows, an Activity starts on the Target Worker Deployment Version of the Workflow. In this case, the Workflow Execution moves to its Target Version immediately before starting the Activity if the Target Version is different from the last used Version.
The Target Worker Deployment Version of a Workflow is the Current or Ramping Version of the Workflow's Task Queue, depending on the Ramp Percentage and Workflow ID.

There is an exception where you will have **Independent Activities**. Independent Activities are specific to Worker Versioning. They start on the Current or Ramping Version of their own Task Queue, independently from their Workflow.

- For a Pinned Workflow, Independent Activities are Activities that start on a Task Queue that's not a member of the calling Workflow's Pinned Worker Deployment Version.
- For an Auto-Upgrade Workflow, Independent Activities are Activities that start on a Task Queue that's not a member of the calling Workflow's Target Worker Deployment Version.

Since Independent Activities aren't part of a Workflow's version, they can run in a few different ways:

- The Activity Task Queue is running in a separate Worker Deployment that only has the Independent Activity.
- The Independent Activity is in an unversioned Task Queue.
- The Independent Activity is in a separate Worker Deployment that has its own Workflows, but other Workflows reuse the Activity from other Worker Deployments.

## Versioning Definitions {#versioning-definitions}

- **Current Worker Deployment Version**: The version that Workflows are routed to unless they were previously pinned on a different version. Other versions can continue polling to allow pinned Workflows to finish executing, or in case you need to roll back. If no Current Version is specified, the default is unversioned.
- **Ramping Worker Deployment Version**: The version that a configurable percentage of Workflows are routed to unless they were previously pinned on a different version. The ramp percentage can be in the range [0, 100]. Workflows that don't go to the Ramping Version go to the Current Version. If no Ramping Version is specified, 100% of new Workflows and Auto-Upgrade Workflows go to the Current Version.
- **Target Worker Deployment Version**: The version your Workflow will move to next. This could be the Deployment's Current Version or the Ramping Version. For example, if an Auto-Upgrade Workflow was running on Version A, the Current Version is B, and there is a 5% ramp to C, there is a 95% chance that its Target Version is B and a 5% chance that it's C.

## Versioning Statuses {#versioning-statuses}

A Worker Deployment Version moves through the following states:

1. **Inactive**: The version exists because a Worker with that version has polled the server. If this version never becomes Active, it will never be Draining or Drained.
2. **Active**: The version is either Current or Ramping, so it is accepting new Workflows and existing Auto-Upgrade Workflows.
3. **Draining**: The version has open pinned Workflows running on it, but stopped being Current or Ramping, usually because a newer version has been deployed. It is possible to be Draining and have no open pinned Workflows for a short time, since the drainage status is updated only periodically.
4. **Drained**: The version was Draining, and now all the pinned Workflows that were running on it are closed. Closed Workflows may still re-run some code paths if they are [Queried](https://docs.temporal.io/sending-messages#sending-queries) within their [Retention Period](https://docs.temporal.io/temporal-service/temporal-server#retention-period) and Workers with that version are still polling.

## Continue-as-new, Child Workflow, and Retry Semantics {#inheritance-semantics}

When Workflows start new runs (e.g.
by continuing-as-new or retrying), the new run may inherit its versioning behavior. This section explains how inheritance works across different Workflow execution patterns.

### Ways Workflows Start New Runs

A Workflow can start a new run through:

- Starting a [Child Workflow](https://docs.temporal.io/child-workflows)
- Invoking [Continue-As-New](https://docs.temporal.io/workflow-execution/continue-as-new)
- Retrying per its [Retry Policy](https://docs.temporal.io/encyclopedia/retry-policies)
- Starting another iteration of a [Cron Job](https://docs.temporal.io/cron-job) (superseded by [Schedules](https://docs.temporal.io/schedule))

### Inheritance Rules Overview

Auto-Upgrade Workflows never inherit versions. By default, Pinned Workflows pass their version to any Pinned children. This section provides more detail on specific inheritance scenarios.

### Inheritance by Scenario

#### Child Workflows

**When Parent is Pinned:**

- Child inherits the parent's version if the child's Task Queue belongs to that version
- Child's first Workflow Task executes in the same version as its parent
- If child is also Pinned: child remains Pinned to the inherited version for its lifetime
- If child is Auto-Upgrade: child's behavior changes to Auto-Upgrade after the first task completes
- If child's Task Queue is not in the same Worker Deployment as parent: no inheritance occurs, and the child starts on the Current Version of its Task Queue

**When Parent is Auto-Upgrade:**

- Child inherits no initial Versioning Behavior
- Child starts on the Current Version of its Worker Deployment, like all new Workflow Executions

#### Continue-As-New

**When Original Workflow is Pinned:**

- The Pinned version is inherited across the Continue-As-New chain
- If the new run's Task Queue is not in the same Worker Deployment as the original Workflow: no inheritance occurs, and the new run starts on the Current Version of its Task Queue

**When Original Workflow is Auto-Upgrade:**

- No version inheritance occurs

#### Retries

**Inheritance Conditions (all must be met):**

- The retried run is effectively pinned at the time of retry
- The retried run inherited a pinned version when it started (i.e., it is a child of a pinned parent, or a Continue-As-New of a pinned run)
- The retried run is running on a Task Queue in the inherited version

**When Conditions Are Not Met:**

- No version inheritance occurs

#### Cron Jobs

- **Never inherit** versioning behavior or version

### Versioning Override Inheritance

- Children, crons, retries, and continue-as-new inherit the source run's override **if**:
  - The override is pinned, **AND**
  - The new Workflow's Task Queue belongs to the override version
- Override inheritance is evaluated separately and takes precedence over the inherited base version

---

## What is a Temporal Worker?

This page discusses the following:

- [Worker](#worker)
- [Worker Program](#worker-program)
- [Worker Entity](#worker-entity)
- [Worker Identity](#worker-identity)
- [Worker Process](#worker-process)

## What is a Worker? {#worker}

In day-to-day conversations, the term Worker is used to denote either a [Worker Program](#worker-program), a [Worker Process](#worker-process), or a [Worker Entity](/workers#worker-entity). Temporal documentation aims to be explicit and differentiate between them.

## What is a Worker Program? {#worker-program}

A Worker Program is the static code that defines the constraints of the Worker Process, developed using the APIs of a Temporal SDK.
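For orientation, a minimal Worker Program in Go looks roughly like the following sketch (the Task Queue name and the registered Workflow and Activity are placeholders):

```go
package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
	"go.temporal.io/sdk/workflow"
)

// Placeholder Workflow and Activity Definitions.
func YourBasicWorkflow(ctx workflow.Context) error { return nil }
func YourActivity(ctx context.Context) error { return nil }

func main() {
	// Connect to the Temporal Service.
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create Temporal client:", err)
	}
	defer c.Close()

	// One Worker Entity polling one Task Queue.
	w := worker.New(c, "your-task-queue", worker.Options{})
	w.RegisterWorkflow(YourBasicWorkflow)
	w.RegisterActivity(YourActivity)

	// Run blocks until the process is interrupted.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("Worker exited:", err)
	}
}
```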
:::info

- [How to run a development Worker using the Go SDK](/develop/go/core-application#develop-worker)
- [How to run a development Worker using the Java SDK](/develop/java/core-application#run-a-dev-worker)
- [How to run a development Worker using the PHP SDK](/develop/php/core-application#run-a-dev-worker)
- [How to run a development Worker using the Python SDK](/develop/python/core-application#run-a-dev-worker)
- [How to run a development Worker using the TypeScript SDK](/develop/typescript/core-application#run-a-dev-worker)
- [How to run a development Worker using the .NET SDK](/develop/dotnet/core-application#run-worker-process)
- [How to run a Temporal Cloud Worker using the Go SDK](/develop/go/core-application#run-a-temporal-cloud-worker)
- [How to run a Temporal Cloud Worker using the TypeScript SDK](/develop/typescript/core-application#run-a-temporal-cloud-worker)

:::

## What is a Worker Entity? {#worker-entity}

A Worker Entity is the individual Worker within a Worker Process that listens to a specific Task Queue. A Worker Entity listens and polls on a single Task Queue. A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

**Can a Worker handle more Workflow Executions than its cache size or number of supported threads?**

Yes, it can. However, the trade-off is added latency.

Workers are stateless, so any Workflow Execution in a blocked state can be safely removed from a Worker. Later on, it can be resurrected on the same or a different Worker when the need arises (in the form of an external event). Therefore, a single Worker can handle millions of open Workflow Executions, assuming it can handle the update rate and that a slightly higher latency is not a concern.

**Operation guides:**

- [How to tune Workers](/develop/worker-performance)

## What is a Worker Identity? {#worker-identity}

Workers have an associated identifier that helps identify the specific Worker instance. By default, Temporal SDKs set a Worker Identity to `${process.pid}@${os.hostname()}`, which combines the Worker's process ID (`process.pid`) and the hostname of the machine running the Worker (`os.hostname()`).

The Worker Identity is visible in various contexts, such as Workflow History and the list of pollers on a Task Queue.

You can use the Worker Identity to aid in debugging operational issues. By providing a user-assigned identifier, you can trace issues back to specific Worker instances.

**What are some limitations of the default identity?**

While the default identity format may seem sensible, it often proves to be of limited use in cloud environments. Some common issues include:

- **Docker containers**: When running Workers inside Docker containers, the process ID is always `1`, as each container typically runs a single process. This makes the process identifier meaningless for identification purposes.
- **Random hostnames**: In some cloud environments, such as Amazon ECS (Elastic Container Service), the hostname is a randomly generated string that does not provide any meaningful information about the Worker's execution context.
- **Ephemeral IP addresses**: In certain cases, the hostname might be set to an ephemeral IP address, which can change over time and does not uniquely identify a Worker instance.

**What are some recommended approaches?**

It is recommended that you ensure that the Worker Identity can be linked back to the corresponding machine, process, execution context, or log stream.
In some execution environments, this might require that you explicitly specify the Worker Identity. Here are some approaches: - **Use environment-specific identifiers**: Choose an identifier that is specific to your execution environment. For example, when running Workers on Amazon ECS, you can set the Worker Identity to the ECS Task ID, which uniquely identifies the task running the Worker. - **Include relevant context**: Incorporate information that helps establish the context of the Worker, such as the deployment environment (`staging` or `production`), region, or any other relevant details. - **Ensure uniqueness**: Make sure that the Worker Identity is unique within your system to avoid ambiguity when debugging issues. - **Keep it concise**: While including relevant information is important, try to keep the Worker Identity concise and easily readable to facilitate quick identification and troubleshooting. ## What is a Worker Process? {#worker-process} A Worker Process is responsible for polling a [Task Queue](/task-queue), dequeueing a [Task](/tasks#task), executing your code in response to a Task, and responding to the [Temporal Service](/temporal-service) with the results. More formally, a Worker Process is any process that implements the Task Queue Protocol and the Task Execution Protocol. - A Worker Process is a Workflow Worker Process if the process implements the Workflow Task Queue Protocol and executes the Workflow Task Execution Protocol to make progress on a Workflow Execution. A Workflow Worker Process can listen on an arbitrary number of Workflow Task Queues and can execute an arbitrary number of Workflow Tasks. - A Worker Process is an Activity Worker Process if the process implements the Activity Task Queue Protocol and executes the Activity Task Processing Protocol to make progress on an Activity Execution. An Activity Worker Process can listen on an arbitrary number of Activity Task Queues and can execute an arbitrary number of Activity Tasks. **Worker Processes are external to a Temporal Service.** Temporal Application developers are responsible for developing [Worker Programs](#worker-program) and operating Worker Processes. Said another way, the [Temporal Service](/temporal-service) (including the Temporal Cloud) doesn't execute any of your code (Workflow and Activity Definitions) on Temporal Service machines. The Temporal Service is solely responsible for orchestrating [State Transitions](/workflow-execution#state-transition) and providing Tasks to the next available [Worker Entity](/workers#worker-entity). While data transferred in Event Histories is [secured by mTLS](/self-hosted-guide/security#encryption-in-transit-with-mtls), by default, it is still readable at rest in the Temporal Service. To solve this, Temporal SDKs offer a [Data Converter API](/dataconversion) that you can use to customize the serialization of data going out of and coming back in to a Worker Entity, with the net effect of guaranteeing that the Temporal Service cannot read sensitive business data. In many of our tutorials, we show you how to run both a Temporal Service and one Worker on the same machine for local development. However, a production-grade Temporal Application typically has a _fleet_ of Worker Processes, all running on hosts external to the Temporal Service. A Temporal Application can have as many Worker Processes as needed. A Worker Process can be both a Workflow Worker Process and an Activity Worker Process. 
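To make this concrete, and as a preview of the multiple-Worker-Entity pattern described next, the following Go sketch runs two Worker Entities inside one Worker Process, each polling its own Task Queue; it also sets an explicit Worker Identity along the lines recommended above (all names and the identity scheme are illustrative):

```go
package main

import (
	"log"
	"os"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// An explicit, environment-specific Worker Identity instead of
	// the default pid@hostname (the naming scheme is hypothetical).
	c, err := client.Dial(client.Options{
		Identity: "orders-service-" + os.Getenv("DEPLOY_ENV"),
	})
	if err != nil {
		log.Fatalln("Unable to create Temporal client:", err)
	}
	defer c.Close()

	// Two Worker Entities in a single Worker Process, one Task Queue each.
	w1 := worker.New(c, "orders-task-queue", worker.Options{})
	w2 := worker.New(c, "reports-task-queue", worker.Options{})
	// Register Workflows and Activities on each Worker Entity as needed.

	if err := w1.Start(); err != nil {
		log.Fatalln("Unable to start Worker:", err)
	}
	if err := w2.Start(); err != nil {
		log.Fatalln("Unable to start Worker:", err)
	}
	<-worker.InterruptCh() // wait for an interrupt signal
	w1.Stop()
	w2.Stop()
}
```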
Many SDKs support the ability to have multiple Worker Entities in a single Worker Process. (Worker Entity creation and management differ between SDKs.) A single Worker Entity can listen to only a single Task Queue. But if a Worker Process has multiple Worker Entities, the Worker Process could be listening to multiple Task Queues. Worker Processes executing Activity Tasks must have access to any resources needed to execute the actions that are defined in Activity Definitions, such as the following: - Network access for external API calls. - Credentials for infrastructure provisioning. - Specialized GPUs for machine learning utilities. The Temporal Service itself has [internal workers](https://temporal.io/blog/workflow-engine-principles/#system-workflows-1910) for system Workflow Executions. However, these internal workers are not visible to the developer. --- ## Temporal Cron Job This page discusses [Cron Job](#temporal-cron-job) including [Cron Schedules](#cron-schedules), [Time Zones](#cron-job-time-zones), and [how to stop a Cron Schedule](#stop-cron-schedules). ## What is a Temporal Cron Job? {#temporal-cron-job} :::note We recommend using [Schedules](/schedule) instead of Cron Jobs. Schedules were built to provide a better developer experience, including more configuration options and the ability to update or pause running Schedules. ::: A Temporal Cron Job is the series of Workflow Executions that occur when a Cron Schedule is provided in the call to spawn a Workflow Execution. - [How to set a Cron Schedule using the Go SDK](/develop/go/schedules#temporal-cron-jobs) - [How to set a Cron Schedule using the Java SDK](/develop/java/schedules#cron-schedule) - [How to set a Cron Schedule using the PHP SDK](/develop/php/schedules#temporal-cron-jobs) - [How to set a Cron Schedule using the Python SDK](/develop/python/schedules#temporal-cron-jobs) - [How to set a Cron Schedule using the TypeScript SDK](/develop/typescript/schedules#temporal-cron-jobs) A Temporal Cron Job is similar to a classic unix cron job. Just as a unix cron job accepts a command and a schedule on which to execute that command, a Cron Schedule can be provided with the call to spawn a Workflow Execution. If a Cron Schedule is provided, the Temporal Server will spawn an execution for the associated Workflow Type per the schedule. Each Workflow Execution within the series is considered a Run. - Each Run receives the same input parameters as the initial Run. - Each Run inherits the same Workflow Options as the initial Run. The Temporal Server spawns the first Workflow Execution in the chain of Runs immediately. However, it calculates and applies a backoff (`firstWorkflowTaskBackoff`) so that the first Workflow Task of the Workflow Execution does not get placed into a Task Queue until the scheduled time. After each Run Completes, Fails, or reaches the [Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout), the same thing happens: the next run will be created immediately with a new `firstWorkflowTaskBackoff` that is calculated based on the current Server time and the defined Cron Schedule. The Temporal Server spawns the next Run only after the current Run has Completed, Failed, or has reached the Workflow Run Timeout. This means that, if a Retry Policy has also been provided, and a Run Fails or reaches the Workflow Run Timeout, the Run will first be retried per the Retry Policy until the Run Completes or the Retry Policy has been exhausted. 
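For reference, the Cron Schedule itself is supplied as a property of the Workflow Execution when it is spawned. A sketch with the Go SDK (the Workflow Type, IDs, and schedule are illustrative):

```go
package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create Temporal client:", err)
	}
	defer c.Close()

	// Providing CronSchedule makes this Workflow Execution a Temporal Cron Job.
	_, err = c.ExecuteWorkflow(context.Background(), client.StartWorkflowOptions{
		ID:           "cron-example",
		TaskQueue:    "cron-task-queue",
		CronSchedule: "15 8 * * *", // daily at 8:15 AM UTC
	}, "YourCronWorkflow") // Workflow Type name; a function reference also works
	if err != nil {
		log.Fatalln("Unable to start Cron Workflow:", err)
	}
}
```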
If the next Run, per the Cron Schedule, is due to spawn while the current Run is still Open (including retries), the Server automatically starts the new Run after the current Run completes successfully. The start time for this new Run and the Cron definitions are used to calculate the `firstWorkflowTaskBackoff` that is applied to the new Run.

A [Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout) is used to limit how long a Workflow can be executing (have an Open status), including retries and any usage of Continue As New. The Cron Schedule runs until the Workflow Execution Timeout is reached or you terminate the Workflow.

## Cron Schedules {#cron-schedules}

Cron Schedules are interpreted in UTC time by default.

The Cron Schedule is provided as a string and must follow one of two specifications:

**Classic specification**

This is what the "classic" specification looks like:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
│ │ │ │ │
* * * * *
```

For example, `15 8 * * *` causes a Workflow Execution to spawn daily at 8:15 AM UTC. Use the [crontab guru site](https://crontab.guru/) to test your cron expressions.

### `robfig` predefined schedules and intervals

You can also pass any of the [predefined schedules](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-Predefined_schedules) or [intervals](https://pkg.go.dev/github.com/robfig/cron/v3#hdr-Intervals) described in the [`robfig/cron` documentation](https://pkg.go.dev/github.com/robfig/cron/v3).

| Schedule               | Description                                | Equivalent To |
| ---------------------- | ------------------------------------------ | ------------- |
| @yearly (or @annually) | Run once a year, midnight, Jan. 1st        | 0 0 1 1 *     |
| @monthly               | Run once a month, midnight, first of month | 0 0 1 * *     |
| @weekly                | Run once a week, midnight between Sat/Sun  | 0 0 * * 0     |
| @daily (or @midnight)  | Run once a day, midnight                   | 0 0 * * *     |
| @hourly                | Run once an hour, beginning of hour        | 0 * * * *     |

For example, "@weekly" causes a Workflow Execution to spawn once a week at midnight between Saturday and Sunday.

Intervals take the form `@every <duration>`, where `<duration>` is a string that can be accepted by [time.ParseDuration](http://golang.org/pkg/time/#ParseDuration), such as `@every 30m`.

## Time zones {#cron-job-time-zones}

_This feature only applies in Temporal 1.15 and up_

You can change the time zone that a Cron Schedule is interpreted in by prefixing the specification with `CRON_TZ=America/New_York` (or your [desired time zone from tz](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)). `CRON_TZ=America/New_York 15 8 * * *` therefore spawns a Workflow Execution every day at 8:15 AM New York time, subject to the caveats listed below.

Consider that using time zones in production introduces a surprising amount of complexity and failure modes! **If at all possible, we recommend specifying Cron Schedules in UTC (the default)**.

If you need to use time zones, here are a few edge cases to keep in mind:

- **Beware Daylight Saving Time:** If a Temporal Cron Job is scheduled around the time when daylight saving time (DST) begins or ends (for example, `30 2 * * *`), **it might run zero, one, or two times in a day**! The Cron library that we use does not do any special handling of DST transitions. Avoid schedules that include times that fall within DST transition periods.
  - For example, in the US, DST transitions at 2 AM. When you "fall back," the clock goes `1:59 … 1:00 … 1:01 … 1:59 … 2:00 … 2:01 AM`, and any Cron Jobs that fall in that 1 AM hour are fired again. The inverse happens when clocks "spring forward" for DST, and Cron Jobs that fall in the 2 AM hour are skipped.
  - In other time zones, like Chile and Iran, the DST "spring forward" is at midnight. 11:59 PM is followed by 1 AM, which means `00:00:00` never happens.
- **Self Hosting note:** If you manage your own Temporal Service, you are responsible for ensuring that it has access to current `tzdata` files. The official Docker images are built with [tzdata](https://docs.w3cub.com/go/time/tzdata/index) installed (provided by Alpine Linux), but ultimately you should be aware of how tzdata is deployed and updated in your infrastructure.
- **Updating Temporal:** If you use the official Docker images, note that an upgrade of the Temporal Service may include an update to the tzdata files, which may change the meaning of your Cron Schedule. You should be aware of upcoming changes to the definitions of the time zones you use, particularly around daylight saving time start/end dates.
- **Absolute Time Fixed at Start:** The absolute start time of the next Run is computed and stored in the database when the previous Run completes, and is not recomputed. This means that if you have a Cron Schedule that runs very infrequently, and the definition of the time zone changes between one Run and the next, the Run might happen at the wrong time. For example, `CRON_TZ=America/Los_Angeles 0 12 11 11 *` means "noon in Los Angeles on November 11" (normally not in DST). If at some point the government makes any changes (for example, moving the end of DST one week later, or staying on permanent DST year-round), the meaning of that specification changes. In that first year, the Run happens at the wrong time, because it was computed using the older definition.

## How to stop a Temporal Cron Job {#stop-cron-schedules}

A Temporal Cron Job does not stop spawning Runs until it has been Terminated or until the [Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout) is reached.

A Cancellation Request affects only the current Run. Use the Workflow Id in any requests to Cancel or Terminate.

---

## Dynamic Handler

This page discusses [Dynamic Handler](#dynamic-handler).

## What is a Dynamic Handler? {#dynamic-handler}

Temporal supports Dynamic Workflows, Activities, Signals, and Queries.

:::note

Currently, the Temporal SDKs that support Dynamic Handlers are:

- [Java](/develop/java/message-passing#dynamic-handler)
- [Python](/develop/python/message-passing#dynamic-handler)
- [.NET](/develop/dotnet/message-passing#dynamic-handler)
- [Go](/develop/go/core-application#set-a-dynamic-workflow)
- [Ruby](/develop/ruby/message-passing#dynamic-handler)

:::

These are unnamed handlers that are invoked if no other statically defined handler with the given name exists. Dynamic Handlers provide flexibility to handle cases where the names of Workflows, Activities, Signals, or Queries aren't known at run time.

:::caution

Dynamic Handlers should be used judiciously as a fallback mechanism rather than the primary approach. Overusing them can lead to maintainability and debugging issues down the line.

Instead, Workflows, Activities, Signals, and Queries should be defined statically whenever possible, with clear names that indicate their purpose. Use static definitions as the primary way of structuring your Workflows.
Reserve Dynamic Handlers for cases where the handler names are not known at compile time and need to be looked up dynamically at runtime. They are meant to handle edge cases and act as a catch-all, not as the main way of invoking logic. ::: --- ## Patching This page discusses [Patching](#patching). ## What is Patching? {#patching} A Patch defines a logical branch in a Workflow for a specific change, similar to a feature flag. It applies a code change to new Workflow Executions while avoiding disruptive changes to in-progress Workflow Executions. When you want to make substantive code changes that may affect existing Workflow executions, create a patch. Note that there's no need to patch [Pinned Workflows](/worker-versioning). ### Detailed Description of the `patched()` Function This applies to the `patched()` function in the Python, .NET, and Ruby SDKs. #### Behavior When Not Replaying If the execution is not replaying, when it encounters a call to `patched()`, it first checks the event history. - If the patch ID is not in the event history, the execution adds a marker to the event history, upserts a search attribute, and returns `true`. This happens in the first block of the patch ID. - If the patch ID is in the event history, the execution doesn't modify the history, and returns `true`. This happens in a patch ID's subsequent blocks, because the event history was updated in the first block. There is a caveat to this behavior, which we will cover below. #### Behavior When Replaying With Marker Before-Or-At Current Location If the execution is replaying and has a call to `patched()`, and if the event history has a marker from a call to `patched()` in the same place (which means it will match the original event history), then it writes a marker to the replay event history and returns `true`. This is similar to the behavior of the non-replay case, and also happens in a given patch ID's first block. If the code has a call to `patched()`, and the event history has a marker with that Patch ID earlier in the history, it will return `true` and will not modify the replay event history. This is also similar to the behavior of the non-replay case, and also happens in a given patch ID's subsequent blocks. #### Behavior When Replaying With Marker After Current Location If the Event History's Marker Event is after the current execution point, that means the new patch is too early. The execution will encounter the new patch before the original. The execution will attempt to write the marker to the replay event history, but it will throw a non-deterministic exception because the replay and original event histories don't match. #### Behavior When Replaying With No Marker For that Patch ID During a Replay, if there is no marker for a given patch ID, the execution will return `false` and will not add a marker to the event history. In addition, all future calls to `patched()` with that ID will return `false` -- even after it is done replaying and is running new code. The [preceding section](#behavior-when-not-replaying) states that if the execution is not replaying, the `patched()` function will always return `true`. If the marker doesn't exist, it will be added, and if the marker already exists, it won't be re-added. However, this behavior doesn't occur if there was already a call to `patched()` with that ID in the replay code, but not in the event history. In this situation, the function won't return `true`. 
#### A Summary of the Two Potentially Unexpected Behaviors

Recapping the potentially unexpected behaviors that may occur during a Replay:

If the execution hits a call to `patched()`, but that patch ID isn't _at or before that point_ in the event history, you may not realize that the event history _after_ the current execution location matters. This behavior occurs because:

- If that patch ID exists later, you get a non-determinism error
- If the patch doesn't exist later, you don't get a non-determinism error, and the call returns `false`

If the execution hits a call to `patched()` with an ID that doesn't exist in the history, then not only will it return `false` in that occurrence, but it will also return `false` if the execution surpasses the Replay threshold and is running new code.

#### Implications of the Behaviors

If you deploy new code while Workflows are executing, any Workflows that were in the middle of executing will Replay up to the point they were at when the Worker was shut down. When they do this Replay, they will not follow the `patched()` branches in the code. For the rest of the execution after they have replayed to the point before the deployment and Worker restart, they will either:

- Use new code if there was no call to `patched()` in the replay code
- Run the non-patched code during and after replay if there was a call to `patched()` in the replay code

This might sound odd, but it is exactly what's needed: if the future patched code depends on earlier patched code, the Workflow won't use the new code; it will use the old code instead. But if there's new code in the future, and no code earlier in the body required the new patch, then the Workflow can switch over to the new code, and it will.

Note that this behavior means that the Workflow _does not always run the newest code_. It does so only if it is not replaying, or if replay is surpassed and there hasn't been a call to `patched()` (with that ID) throughout the replay.

#### Recommendations

Based on this behavior and its implications, when patching in new code, always put the newest code at the top of an if-patched-block.

```python
if patched('v3'):
    # This is the newest version of the code.
    # put this at the top, so when it is running
    # a fresh execution and not replaying,
    # this patched statement will return true
    # and it will run the new code.
    pass
elif patched('v2'):
    pass
else:
    pass
```

The following sample shows how `patched()` will behave in a conditional block that's arranged differently. In this case, the code's conditional block doesn't have the newest code at the top. Because `patched()` will return `True` when not Replaying (except with the preceding caveats), this snippet will run the `v2` branch instead of `v3` in new executions.

```python
if patched('v2'):
    # This is bad because when doing a new execution (i.e. not replaying),
    # patched statements evaluate to True (and put a marker
    # in the event history), which means that new executions
    # will use v2, and miss v3 below
    pass
elif patched('v3'):
    pass
else:
    pass
```

---

## Schedule

This page discusses [Schedule](#schedule).

## What is a Schedule? {#schedule}

A Schedule contains instructions for starting a [Workflow Execution](/workflow-execution) at specific times. Schedules provide a more flexible and user-friendly approach than [Temporal Cron Jobs](/cron-job).
- [How to enable Schedules](#limitations) - [How to operate Schedules using the Temporal CLI](/cli/schedule) A Schedule has an identity and is independent of a Workflow Execution. This differs from a Temporal Cron Job, which relies on a cron schedule as a property of the Workflow Execution. :::info For triggering a Workflow Execution at a specific one-time future point rather than on a recurring schedule, the [Start Delay](/workflow-execution/timers-delays#delay-workflow-execution) option should be used instead of a Schedule. ::: ### Action The Action of a Schedule is where the Workflow Execution properties are established, such as Workflow Type, Task Queue, parameters, and timeouts. Workflow Executions started by a Schedule have the following additional properties: - The Action's timestamp is appended to the Workflow Id. - The `TemporalScheduledStartTime` [Search Attribute](/search-attribute) is added to the Workflow Execution. The value is the Action's timestamp. - The `TemporalScheduledById` Search Attribute is added to the Workflow Execution. The value is the Schedule Id. ### Spec The Schedule Spec defines when the Action should be taken. Unless many Schedules have Actions scheduled at the same time, Actions should generally start within 1 second of the specified time. There are two kinds of Schedule Spec: - A simple interval, like "every 30 minutes" (aligned to start at the Unix epoch, and optionally including a phase offset). - A calendar-based expression, similar to the "cron expressions" supported by lots of software, including the older Temporal Cron feature. These two kinds have multiple representations, depending on the interface or SDK you're using, but they all support the same features. In the Temporal CLI, for example, an interval is specified as a string like `45m` to mean every 45 minutes, or `6h/5h` to mean every 6 hours but at the start of the fifth hour within each period. In the Temporal CLI, a calendar expression can be specified as either a traditional cron string with five (or six or seven) positional fields, or as JSON with named fields: ```json { "year": "2022", "month": "Jan,Apr,Jul,Oct", "dayOfMonth": "1,15", "hour": "11-14" } ``` The following calendar JSON fields are available: - `year` - `month` - `dayOfMonth` - `dayOfWeek` - `hour` - `minute` - `second` - `comment` Each field can contain a comma-separated list of ranges (or the `*` wildcard), and each range can include a slash followed by a skip value. The `hour`, `minute`, and `second` fields default to `0` while the others default to `*`, so you can describe many useful specs with only a few fields. For `month`, names of months may be used instead of integers (case-insensitive, abbreviations permitted). For `dayOfWeek`, day-of-week names may be used. The `comment` field is optional and can be used to include a free-form description of the intent of the calendar spec, useful for complicated specs. No matter which form you supply, calendar and interval specs are converted to canonical representations. What you see when you "describe" or "list" a Schedule might not look exactly like what you entered, but it has the same meaning. Other Spec features: **Multiple intervals/calendar expressions:** A Spec can have combinations of multiple intervals and/or calendar expressions to define a specific Schedule. **Time bounds:** Provide an absolute start or end time (or both) with a Spec to ensure that no actions are taken before the start time or after the end time. 
**Exclusions:** A Spec can contain exclusions in the form of zero or more calendar expressions. This can be used to express scheduling like "each Monday at noon except for holidays". You'll have to provide your own set of exclusions and include it in each Schedule; there are no pre-defined sets. (This feature isn't currently exposed in the Temporal CLI or the Temporal Web UI.)

**Jitter:** If given, a random offset between zero and the maximum jitter is added to each Action time (but bounded by the time until the next scheduled Action).

**Time zones:** By default, calendar-based expressions are interpreted in UTC. Temporal recommends using UTC to avoid various surprising properties of time zones. If you don't want to use UTC, you can provide the name of a time zone. The time zone definition is loaded on the Temporal Server Worker Service from either disk or the fallback embedded in the binary. For more operational control, embed the contents of the time zone database file in the Schedule Spec itself. (Note: this isn't currently exposed in the Temporal CLI or the web UI.)

### Pause

A Schedule can be Paused. When a Schedule is Paused, the Spec has no effect. However, you can still force manual actions by using the [temporal schedule trigger](/cli/schedule#trigger) command.

To assist communication among developers and operators, a "notes" field can be updated on pause or resume to store an explanation for the current state.

### Backfill

A Schedule can be Backfilled. When a Schedule is Backfilled, all the Actions that would have been taken over a specified time period are taken now (in parallel if the `AllowAll` [Overlap Policy](#overlap-policy) is used; sequentially if `BufferAll` is used). You might use this to fill in runs from a time period when the Schedule was paused due to an external condition that's now resolved, or a period before the Schedule was created.

### Limit number of Actions

A Schedule can be limited to a certain number of scheduled Actions (that is, Actions taken per the Spec; manually triggered Actions don't count toward the limit). After that limit is reached, the Schedule acts as if it were paused.

### Policies

A Schedule supports a set of Policies that enable customizing behavior.

#### Overlap Policy

The Overlap Policy controls what happens when it is time to start a Workflow Execution but a previously started Workflow Execution is still running. The following options are available:

- `Skip`: **Default**. Nothing happens; the Workflow Execution is not started.
- `BufferOne`: Starts the Workflow Execution as soon as the current one completes. The buffer is limited to one. If another Workflow Execution is supposed to start, but one is already in the buffer, only the one in the buffer eventually starts.
- `BufferAll`: Allows an unlimited number of Workflows to buffer. They are started sequentially.
- `CancelOther`: Cancels the running Workflow Execution, and then starts the new one after the old one completes cancellation.
- `TerminateOther`: Terminates the running Workflow Execution and starts the new one immediately.
- `AllowAll`: Starts any number of concurrent Workflow Executions. With this policy (and only this policy), more than one Workflow Execution, started by the Schedule, can run simultaneously.

#### Catchup Window

The Temporal Service might be down or unavailable at the time when a Schedule should take an Action. When it comes back up, the Catchup Window controls which missed Actions should be taken at that point. The default is one year, meaning missed Actions will be taken unless they are over one year late.
If your Actions are more time-sensitive, you can set the Catchup Window to a smaller value (minimum ten seconds), accepting that an outage longer than the window could lead to missed Actions. (But you can always [Backfill](#backfill).)

#### Pause-on-failure

If this policy is set, a Workflow Execution started by a Schedule that ends with a failure or timeout (but not Cancellation or Termination) causes the Schedule to automatically pause.

Note that with the `AllowAll` Overlap Policy, this pause might not apply to the next Workflow Execution, because the next Workflow Execution might have started before the failed one finished. It applies only to Workflow Executions that were scheduled to start after the failed one finished.

### Last completion result

A Workflow started by a Schedule can obtain the completion result from the most recent successful run. (How you do this depends on the SDK you're using.)

For Overlap Policies that don't allow overlap, "the most recent successful run" is straightforward to define. For the `AllowAll` policy, it refers to the run that completed most recently, at the time that the run in question is started. Consider the following overlapping runs:

```
time --------------------------------------------->
A |----------------------|
B           |-------|
C                |---------------|
D                            |--------------T
```

If D asks for the last completion result at time T, it gets the result of A. Not B, even though B started more recently, because A completed later. And not C, even though C completed after A, because the result for D is captured when D is started, not when it's queried.

Failures and timeouts do not affect the last completion result.

:::note

When a Schedule triggers a Workflow that completes successfully and yields a result, the result from the initial Schedule execution can be accessed by the subsequent scheduled execution through `LastCompletionResult`. Be aware that if, during the subsequent run, the Workflow employs the [Continue-As-New](/workflow-execution/continue-as-new) feature, `LastCompletionResult` won't be accessible for this new Workflow iteration. It is important to note that the [status](/workflow-execution#workflow-execution-status) of the subsequent run is marked as `Continued-As-New` and not as `Completed`.

:::

:::caution

A scheduled Workflow Execution may complete with a result up to the maximum blob size (2 MiB by default). However, due to internal limitations, results that are within 1 KiB of this limit cannot be passed to the next execution. For example, a Workflow Execution that returns a result of 2,096,640 bytes (above the 2 MiB minus 1 KiB limit) will be allowed to complete successfully, but that value will not be available as a last completion result. This limitation may be lifted in the future.

:::

### Last failure

A Workflow started by a Schedule can obtain the details of the failure of the most recent run that ended at the time when the Workflow in question was started. Unlike the last completion result, a _successful_ run _does_ reset the last failure.

### Limitations

Internally, a Schedule is implemented as a Workflow. If you're using Advanced Visibility (Elasticsearch), these Workflow Executions are hidden from normal views. If you're using Standard Visibility, they are visible, though there's no need to interact with them directly.
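Tying the pieces above together, creating a Schedule from code looks roughly like this sketch with the Go SDK (the IDs, Spec, Action, and policy values are all illustrative):

```go
package main

import (
	"context"
	"log"
	"time"

	enumspb "go.temporal.io/api/enums/v1"
	"go.temporal.io/sdk/client"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create Temporal client:", err)
	}
	defer c.Close()

	_, err = c.ScheduleClient().Create(context.Background(), client.ScheduleOptions{
		ID: "nightly-report-schedule",
		Spec: client.ScheduleSpec{
			// A calendar expression in traditional cron form: 3 AM UTC daily.
			CronExpressions: []string{"0 3 * * *"},
		},
		Action: &client.ScheduleWorkflowAction{
			ID:        "nightly-report", // the Action's timestamp is appended to this Workflow Id
			Workflow:  "ReportWorkflow", // hypothetical Workflow Type
			TaskQueue: "report-task-queue",
		},
		// Policies discussed above.
		Overlap:       enumspb.SCHEDULE_OVERLAP_POLICY_SKIP, // the default
		CatchupWindow: time.Hour,
	})
	if err != nil {
		log.Fatalln("Unable to create Schedule:", err)
	}
}
```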
---

## Temporal Workflow Definition

This page covers the following:

- [What is a Workflow Definition?](/workflow-definition)
- [Determinism and constraints](#deterministic-constraints)
- [Handling code changes and non-deterministic behavior](#non-deterministic-change)
- [Intrinsic non-determinism logic](#intrinsic-nondeterministic-logic)
- [Versioning Workflow code and Patching](#workflow-versioning)
- [Handling unreliable Worker Processes](#unreliable-worker-processes)
- [What is a Workflow Type?](#workflow-type)

A Temporal Workflow defines the overall flow of the application. Conceptually, a Workflow is a sequence of steps written in a general-purpose programming language. With Temporal, those steps are defined by writing code, known as a Workflow Definition, and are carried out by running that code, which results in a Workflow Execution.

In day-to-day conversations, the term _Workflow_ might refer to a [Workflow Type](#workflow-type), a [Workflow Definition](/workflow-definition), or a [Workflow Execution](/workflow-execution). Temporal documentation aims to be explicit and differentiate between them.

## What is a Workflow Definition? {#workflow-definition}

A Workflow Definition is the code that defines the Workflow. It is written with a programming language and corresponding Temporal SDK. Depending on the programming language, it's typically implemented as a function or an object method and encompasses the end-to-end series of steps of a Temporal application.

Below are different ways to develop a basic Workflow Definition.

**[Workflow Definition in Go](/develop/go/core-application#develop-workflows)**

```go
func YourBasicWorkflow(ctx workflow.Context) error {
    // ...
    return nil
}
```

**[Workflow Definition in Java (Interface)](/develop/java/core-application#develop-workflows)**

```java
// Workflow interface
@WorkflowInterface
public interface YourBasicWorkflow {
    @WorkflowMethod
    String workflowMethod(Arguments args);
}
```

**[Workflow Definition in Java (Implementation)](/develop/java/core-application#develop-workflows)**

```java
// Workflow implementation
public class YourBasicWorkflowImpl implements YourBasicWorkflow {
    // ...
}
```

**[Workflow Definition in PHP (Interface)](/develop/php/core-application#develop-workflows)**

```php
#[WorkflowInterface]
interface YourBasicWorkflow {
    #[WorkflowMethod]
    public function workflowMethod(Arguments $args);
}
```

**[Workflow Definition in PHP (Implementation)](/develop/php/core-application#develop-workflows)**

```php
class YourBasicWorkflowImpl implements YourBasicWorkflow {
    // ...
}
```

**[Workflow Definition in Python](/develop/python/core-application#develop-workflows)**

```python
@workflow.defn
class YourWorkflow:
    @workflow.run
    async def YourBasicWorkflow(self, input: str) -> str:
        # ...
```

**[Workflow Definition in TypeScript](/develop/typescript/core-application#develop-workflows)**

```typescript
type BasicWorkflowArgs = {
  param: string;
};

export async function WorkflowExample(
  args: BasicWorkflowArgs,
): Promise<{ result: string }> {
  // ...
}
```

**[Workflow Definition in C# and .NET](/develop/dotnet/core-application#develop-workflow)**

```csharp
[Workflow]
public class YourBasicWorkflow
{
    [WorkflowRun]
    public async Task WorkflowExample(string param)
    {
        // ...
    }
}
```

A Workflow Definition may also be referred to as a Workflow Function.
In Temporal's documentation, a Workflow Definition refers to the source for the instance of a Workflow Execution, while a Workflow Function refers to the source for the instance of a Workflow Function Execution. A Workflow Execution effectively executes once to completion, while a Workflow Function Execution occurs many times during the life of a Workflow Execution.

We strongly recommend that you write a Workflow Definition in a language that has a corresponding Temporal SDK.

### Deterministic constraints {#deterministic-constraints}

A critical aspect of developing Workflow Definitions is ensuring that they are deterministic. Generally speaking, this means you must take care to ensure that any time your Workflow code is executed, it makes the same Workflow API calls in the same sequence, given the same input.

Some changes to those API calls are safe to make. For example, you can change:

- The input parameters, return values, and execution timeouts of Child Workflows and Activities
  - However, it is not safe to change the types or IDs of Child Workflows or Activities
- The input parameters used to Signal an external Workflow
- The duration of Timers (although changing them to 0 is not safe in all SDKs)
- Calls to Workflow APIs that don't produce [Commands](/workflow-execution#command), which can be added or removed freely (for example, `workflow.GetInfo` in the Go SDK or its equivalent in other SDKs)

The following Workflow API calls can all produce Commands, and thus must not be reordered, added, or removed without proper [Versioning techniques](#workflow-versioning):

- Starting or cancelling a Timer
- Scheduling or cancelling Activity Executions (including Local Activities)
- Starting or cancelling Child Workflow Executions
- Signaling external Workflow Executions, or cancelling those Signals
- Scheduling or cancelling Nexus Operations
- Ending the Workflow Execution in any way (completing, failing, cancelling, or continuing-as-new)
- `Patched` or `GetVersion` calls for Versioning (although they may be added or removed according to the [patching](#workflow-patching) rules)
- Upserting Workflow Search Attributes
- Upserting Workflow Memos
- Running a `SideEffect` or `MutableSideEffect`

For a complete reference, see the [Command reference](/references/commands).

More formally, the use of certain Workflow APIs in the function is what generates Commands. Commands tell the Temporal Service which Events to create and add to the Workflow Execution's [Event History](/workflow-execution/event#event-history). When the Workflow's code [replays](/workflow-execution#replay), the Commands that are emitted are compared with the existing Event History. If a corresponding Event already exists within the Event History that matches that Command, then the Execution progresses.

See [Event History](/encyclopedia/event-history/) for a detailed walkthrough of the process.

For example, using an SDK's "Execute Activity" API generates the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command. When this API is called upon re-execution, that Command is compared with the Event that is in the same location within the sequence. The Event in the sequence must be an [ActivityTaskScheduled](/references/events#activitytaskscheduled) Event, where the Activity name is the same as what is in the Command.

If a generated Command doesn't match its corresponding Event in the existing Event History, then the Workflow Execution returns a _non-determinism_ error.
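As a minimal Go sketch of this matching process (the `ProcessOrder` and `NotifyCustomer` Activities below are hypothetical names for illustration):

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// ProcessOrder and NotifyCustomer are hypothetical Activities.
func ProcessOrder(ctx context.Context, orderID string) error   { return nil }
func NotifyCustomer(ctx context.Context, orderID string) error { return nil }

// Each ExecuteActivity call below produces a ScheduleActivityTask Command.
// On Replay, the first call must line up with the first ActivityTaskScheduled
// Event in History, the second call with the second, and so on. Reordering
// these calls in a new code version would produce a non-determinism error
// for Executions that recorded the old order.
func OrderWorkflow(ctx workflow.Context, orderID string) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})
	if err := workflow.ExecuteActivity(ctx, ProcessOrder, orderID).Get(ctx, nil); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, NotifyCustomer, orderID).Get(ctx, nil)
}
```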
The following are the two reasons why a Command might be generated out of sequence, or the wrong Command might be generated altogether:

1. Code changes are made to a Workflow Definition that is in use by a running Workflow Execution.
2. There is intrinsic non-deterministic logic (such as inline random branching).

### Code changes can cause non-deterministic behavior {#non-deterministic-change}

The Workflow Definition can change in very limited ways once there is a Workflow Execution depending on it. To alleviate non-deterministic issues that arise from code changes, we recommend using [Workflow Versioning](#workflow-versioning).

For example, let's say we have a Workflow Definition that defines the following sequence:

1. Start and wait on a Timer/sleep.
2. Spawn and wait on an Activity Execution.
3. Complete.

We start a Worker and spawn a Workflow Execution that uses that Workflow Definition. The Worker would emit the [StartTimer](/references/commands#starttimer) Command and the Workflow Execution would become suspended.

Before the Timer is up, we change the Workflow Definition to the following sequence:

1. Spawn and wait on an Activity Execution.
2. Start and wait on a Timer/sleep.
3. Complete.

When the Timer fires, the next Workflow Task will cause the Workflow Function to re-execute. The first Command the Worker sees would be the [ScheduleActivityTask](/references/commands#scheduleactivitytask) Command, which wouldn't match up to the expected [TimerStarted](/references/events#timerstarted) Event. The Workflow Execution would fail and return a non-determinism error.

The following are examples of minor changes that would not result in non-determinism errors when re-executing a History that already contains the Events:

- Changing the duration of a Timer, with the following exceptions:
  - In Java, Python, and Go, changing a Timer's duration from or to 0 is a non-deterministic behavior.
  - In .NET, changing a Timer's duration from or to -1 (which means "infinite") is a non-deterministic behavior.
- Changing the arguments to:
  - The Activity Options in a call to spawn an Activity Execution (local or nonlocal).
  - The Child Workflow Options in a call to spawn a Child Workflow Execution.
  - A call to Signal an External Workflow Execution.
- Adding a Signal Handler for a Signal Type that has not been sent to this Workflow Execution.

### Intrinsic non-deterministic logic {#intrinsic-nondeterministic-logic}

Intrinsic non-determinism is when a Workflow Function Execution might emit a different sequence of Commands on re-execution, regardless of whether all the input parameters are the same.

For example, a Workflow Definition cannot have inline logic that branches (emits a different Command sequence) based on a local time setting or a random number. In the representative pseudocode below, the `local_clock()` function returns the local time, rather than Temporal-defined time:

```text
fn your_workflow() {
  if local_clock().is_before("12pm") {
    await workflow.sleep(duration_until("12pm"))
  } else {
    await your_afternoon_activity()
  }
}
```

Each Temporal SDK offers APIs that enable Workflow Definitions to have logic that gets and uses time, random numbers, and data from unreliable resources. When those APIs are used, the results are stored as part of the Event History, which means that a re-executed Workflow Function will issue the same sequence of Commands, even if there is branching involved.

In other words, all operations that do not purely mutate the Workflow Execution's state should occur through a Temporal SDK API.
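For instance, a deterministic Go version of the pseudocode above could use `workflow.Now` and `workflow.Sleep`, whose results are recorded in the Event History. This is a minimal sketch; `YourAfternoonActivity` is a hypothetical Activity:

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// YourAfternoonActivity is a hypothetical Activity.
func YourAfternoonActivity(ctx context.Context) error { return nil }

func YourDeterministicWorkflow(ctx workflow.Context) error {
	// workflow.Now returns replay-safe time recorded in the Event History,
	// unlike time.Now, so the branch below is reproduced exactly on Replay.
	now := workflow.Now(ctx)
	noon := time.Date(now.Year(), now.Month(), now.Day(), 12, 0, 0, 0, now.Location())
	if now.Before(noon) {
		// workflow.Sleep starts a durable Timer.
		return workflow.Sleep(ctx, noon.Sub(now))
	}
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})
	return workflow.ExecuteActivity(ctx, YourAfternoonActivity).Get(ctx, nil)
}
```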
### Versioning Workflows {#workflow-versioning}

The Temporal Platform requires that Workflow code (Workflow Definitions) be deterministic in nature. This requirement means that developers should consider how they plan to handle changes to Workflow code over time. A versioning strategy is even more important if your Workflow Executions live long enough to run on multiple versions of your Worker.

The Temporal Platform provides Workflow Versioning APIs. Temporal offers two Versioning strategies:

- [Worker Versioning](#worker-versioning): keep Workers tied to specific code revisions, so that old Workers can run old code paths and new Workers can run new code paths.

:::note
Support for the experimental method of Worker Versioning prior to 2025 will be removed from Temporal Server in March 2026. Refer to the [latest Worker Versioning docs](/worker-versioning) for guidance.
:::

- [Versioning with patching](#workflow-patching): make sure your code changes are compatible across versions of your Workflow.

You can use either strategy, or a combination.

#### Worker Versioning {#worker-versioning}

This is the **recommended** way to handle versioning; users typically see improved error rates after adopting it. To learn more about Worker Versioning, see our [Worker Versioning in production](/production-deployment/worker-deployments/worker-versioning) page.

#### Versioning with Patching {#workflow-patching}

To keep Workflows compatible, you patch your Workflow code and, ideally, test that your running Workflows will be safe to run on the new code version.

To patch:

- [How to patch Workflow code in Go](/develop/go/versioning#patching)
- [How to patch Workflow code in Java](/develop/java/versioning#patching)
- [How to patch Workflow code in Python](/develop/python/versioning#patching)
- [How to patch Workflow code in PHP](/develop/php/versioning#php-sdk-patching-api)
- [How to patch Workflow code in TypeScript](/develop/typescript/versioning#patching)
- [How to patch Workflow code in .NET](/develop/dotnet/versioning#patching)

To test, see [Safe Deployments](/develop/safe-deployments).

### Handling unreliable Worker Processes {#unreliable-worker-processes}

You do not handle Worker Process failure or restarts in a Workflow Definition. Workflow Function Executions are completely oblivious to the Worker Process in terms of failures or downtime. The Temporal Platform ensures that the state of a Workflow Execution is recovered and progress resumes if there is an outage of either Worker Processes or the Temporal Service itself. The only reason a Workflow Execution might fail is the code throwing an error or exception, not underlying infrastructure outages.

### What is a Workflow Type? {#workflow-type}

A Workflow Type is a name that maps to a Workflow Definition.

- A single Workflow Type can be instantiated as multiple Workflow Executions.
- A Workflow Type is scoped by a Task Queue. It is acceptable to have the same Workflow Type name map to different Workflow Definitions if they are using completely different Workers.

---

## Continue-As-New

This page discusses [Continue-As-New](#continue-as-new) and how to decide [when to use it](#when).

## What is Continue-As-New? {#continue-as-new}

Continue-As-New allows you to checkpoint your Workflow's state and start a fresh Workflow.
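As a minimal Go sketch of this checkpointing pattern (assuming a recent Go SDK that exposes `GetContinueAsNewSuggested`; the state type and `ProcessBatch` Activity are hypothetical):

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// WorkflowState is a hypothetical checkpointable state.
type WorkflowState struct {
	Processed int
}

// ProcessBatch is a hypothetical Activity that does one batch of work.
func ProcessBatch(ctx context.Context, n int) error { return nil }

// LoopWorkflow processes work in batches, then checkpoints its state by
// continuing as new, which starts a fresh Execution with an empty History.
func LoopWorkflow(ctx workflow.Context, state WorkflowState) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})
	// Keep working until the server suggests checkpointing.
	for !workflow.GetInfo(ctx).GetContinueAsNewSuggested() {
		if err := workflow.ExecuteActivity(ctx, ProcessBatch, state.Processed).Get(ctx, nil); err != nil {
			return err
		}
		state.Processed++
	}
	// Same Workflow Id, new Run Id; the latest state is passed as the argument.
	return workflow.NewContinueAsNewError(ctx, LoopWorkflow, state)
}
```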
There are two main reasons you might want to start a new Workflow:

- A Workflow Execution with a long or large [Event History](/workflow-execution/event#event-history), such as one calling many Activities, may bog down and have performance issues. It could even generate more Events than allowed by the [Event History limits](/workflow-execution/event#event-history-limits).
- A Workflow Execution can hit [Workflow Versioning](/workflow-definition#workflow-versioning) problems if it started running on an older version of your code and then begins executing on a newer version.

Your goal is to create a new Workflow with a fresh history that picks up where your last one left off. First, pass your latest relevant state into Continue-As-New. This hands it to a new Execution in the [Execution Chain](/workflow-execution#workflow-execution-chain). This state is passed in as arguments to your Workflow. The parameters are typically optional and left unset by the original caller of the Workflow.

The new Workflow Execution has the same Workflow Id, but a different Run Id, and starts its own Event History. You can repeat Continue-As-New as often as needed, which means that your Workflow can run forever. Workflows that do this are often called Entity Workflows because they represent durable objects, not just processes.

- [How to Continue-As-New using the Go SDK](/develop/go/continue-as-new#how)
- [How to Continue-As-New using the Java SDK](/develop/java/continue-as-new)
- [How to Continue-As-New using the PHP SDK](/develop/php/continue-as-new)
- [How to Continue-As-New using the Python SDK](/develop/python/continue-as-new#how)
- [How to Continue-As-New using the TypeScript SDK](/develop/typescript/continue-as-new)
- [How to Continue-As-New using the .NET SDK](/develop/dotnet/continue-as-new)

## When in your Workflow is it right to Continue-As-New? {#when}

Temporal will tell your Workflow when it's approaching performance or scalability problems. Find out whether it's time by checking Continue-As-New Suggested (as in the Go sketch above) at spots in your implementation where you are ready to checkpoint your state.

To prevent long-running Workflows from running on stale versions of code, you may also want to Continue-As-New periodically, depending on how often you deploy. This makes sure you're running only a couple of versions, which avoids some backwards-compatibility problems.

- [Determine when to Continue-As-New using the Go SDK](/develop/go/continue-as-new#when)
- [Determine when to Continue-As-New using the Java SDK](/develop/java/continue-as-new)
- [Determine when to Continue-As-New using the PHP SDK](/develop/php/continue-as-new)
- [Determine when to Continue-As-New using the Python SDK](/develop/python/continue-as-new#when)
- [Determine when to Continue-As-New using the TypeScript SDK](/develop/typescript/continue-as-new)
- [Determine when to Continue-As-New using the .NET SDK](/develop/dotnet/continue-as-new)

---

## Events and Event History

This page discusses the following:

- [Events](#event)
- [Activity Events](#activity-events)
- [Event History](#event-history)
- [Event Loop](#event-loop)
- [Time Constraints](#time-constraints)
- [Reset](#reset)
- [Side Effect](#side-effect)

The Temporal Service tracks the progress of each Workflow Execution by appending information about Events, such as when the Workflow Execution began or ended, to the Event History associated with that execution.
This information not only enables developers to know what took place, but is also essential for providing Durable Execution, since it enables the Workflow Execution to recover from a crash and continue making progress. To maintain high performance, the Temporal Service places limits on both the number and size of items in the Event History for each Workflow Execution.

## What is an Event? {#event}

Events are created by the Temporal Service in response to external occurrences and Commands generated by a Workflow Execution. Each Event corresponds to an `enum` that is defined in the [Server API](https://github.com/temporalio/api/blob/master/temporal/api/enums/v1/event_type.proto). All Events are recorded in the [Event History](#event-history).

A list of all possible Events that could appear in a Workflow Execution Event History is provided in the [Event reference](/references/events).

### Activity Events {#activity-events}

Seven Activity-related Events are added to Event History at various points in an Activity Execution:

- After a [Workflow Task Execution](/tasks#workflow-task-execution) reaches a line of code that starts/executes an Activity, the Worker sends the Activity Type and arguments to the Temporal Service, and the Temporal Service adds an [ActivityTaskScheduled](/references/events#activitytaskscheduled) Event to Event History.
- When `ActivityTaskScheduled` is added to History, the Temporal Service adds a corresponding Activity Task to the Task Queue.
- A Worker polling that Task Queue picks up the Activity Task and runs the Activity function or method.
- If the Activity function returns, the Worker reports completion to the Temporal Service, and the Temporal Service adds [ActivityTaskStarted](/references/events#activitytaskstarted) and [ActivityTaskCompleted](/references/events#activitytaskcompleted) to Event History.
- If the Activity function throws a [non-retryable Failure](/references/failures#non-retryable), the Temporal Service adds [ActivityTaskStarted](/references/events#activitytaskstarted) and [ActivityTaskFailed](/references/events#activitytaskfailed) to Event History.
- If the Activity function throws an error or retryable Failure, the Temporal Service schedules an Activity Task retry to be added to the Task Queue (unless you've reached the Maximum Attempts value of the [Retry Policy](/encyclopedia/retry-policies), in which case the Temporal Service adds [ActivityTaskStarted](/references/events#activitytaskstarted) and [ActivityTaskFailed](/references/events#activitytaskfailed) to Event History).
- If the Activity's [Start-to-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) passes before the Activity function returns or throws, the Temporal Service schedules a retry.
- If the Activity's [Schedule-to-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout) passes before the Activity Execution is complete, or if the [Schedule-to-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout) passes before a Worker gets the Activity Task, the Temporal Service writes [ActivityTaskTimedOut](/references/events#activitytasktimedout) to Event History.
- If the Activity is [canceled](/activity-execution#cancellation), the Temporal Service writes [ActivityTaskCancelRequested](/references/events#activitytaskcancelrequested) to Event History, and if the Activity accepts cancellation, the Temporal Service writes [ActivityTaskCanceled](/references/events#activitytaskcanceled).
:::note
While the Activity is running and retrying, [ActivityTaskScheduled](/references/events#activitytaskscheduled) is the only Activity-related Event in History; [ActivityTaskStarted](/references/events#activitytaskstarted) is written only along with a terminal Event like [ActivityTaskCompleted](/references/events#activitytaskcompleted) or [ActivityTaskFailed](/references/events#activitytaskfailed).
:::

### What is an Event History? {#event-history}

An append-only log of [Events](#event) for your application.

- Event History is durably persisted by the Temporal Service, enabling seamless recovery of your application state from crashes or failures.
- It also serves as an audit log for debugging.

### Event History limits {#event-history-limits}

The Temporal Service stores the complete Event History for the entire lifecycle of a Workflow Execution. The Temporal Service logs a [warning after 10,240 Events](/workflow-execution/limits) and periodically logs additional warnings as new Events are added.

The Workflow Execution is terminated when the Event History:

- exceeds 51,200 Events.
- contains more than 2,000 Updates.
- contains more than 10,000 Signals.

To avoid hitting these limits, you can use the [Continue-As-New](/workflow-execution/continue-as-new) feature to close the current Workflow Execution and create a new one.

### Event loop {#event-loop}

A Workflow Execution is made up of a sequence of [Events](#event) called an [Event History](#event-history). Events are created by the Temporal Service in response to either Commands or actions requested by a Temporal Client (such as a request to spawn a Workflow Execution).

## Time constraints {#time-constraints}

**Is there a limit to how long Workflows can run?**

No, there is no time constraint on how long a Workflow Execution can run.

However, if your Workflow will perform many actions, or will receive many messages, it can run into [Event History limits](#event-history-limits). It can also hit [Workflow Versioning](/workflow-definition#workflow-versioning) and other backwards-incompatibility problems. For these reasons, it can be a good idea to [Continue-As-New](/workflow-execution/continue-as-new) periodically.

## What is a Reset? {#reset}

A Reset terminates a [Workflow Execution](/workflow-execution) and creates a new Workflow Execution with the same [Workflow Type](/workflow-definition#workflow-type) and [Workflow Id](/workflow-execution/workflowid-runid). The [Event History](/workflow-execution/event#event-history) is copied from the original execution up to and including the reset point. The new execution continues from the reset point.

Valid reset points are: `WorkflowTaskStarted`, `WorkflowTaskCompleted`, `WorkflowTaskTimedOut`, and `WorkflowTaskFailed`.

Signals in the original history can optionally be copied to the new history, whether they appear after the reset point or not.

## What is a Side Effect? {#side-effect}

:::note
Side Effects are included in the Go, Java, and PHP SDKs. They are not included in other SDKs. [Local Activities](/local-activity) fit the same use case and are slightly less resource-intensive.
:::

A Side Effect is a way to execute a short, non-deterministic code snippet, such as generating a UUID. It executes the provided function once and records its result into the Workflow Execution Event History. A Side Effect does not re-execute upon replay, but instead returns the recorded result.
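For example, a minimal Go sketch of recording a UUID with a Side Effect (assuming the `github.com/google/uuid` package for ID generation):

```go
package app

import (
	"github.com/google/uuid"
	"go.temporal.io/sdk/workflow"
)

func YourSideEffectWorkflow(ctx workflow.Context) (string, error) {
	// The function passed to SideEffect runs once; on Replay, the value
	// recorded in the Event History is returned instead of re-executing it.
	encoded := workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
		return uuid.NewString()
	})
	var id string
	if err := encoded.Get(&id); err != nil {
		return "", err
	}
	return id, nil
}
```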
Never use a Side Effect that could fail, because a failure could result in the Side Effect function executing more than once. If there is any chance that the code provided to the Side Effect could fail, use an Activity instead.

---

## Workflow Execution Limits

This page discusses [Workflow Execution limits](#workflow-execution-limits), [Workflow Execution Callback limits](#workflow-execution-callback-limits), and [Nexus Operation limits](#workflow-execution-nexus-operation-limits).

## Limits {#workflow-execution-limits}

There is no limit to the number of concurrent Workflow Executions, although each Workflow Execution must abide by the Event History limits.

:::caution
As a precautionary measure, the Workflow Execution's Event History is limited to [51,200 Events](https://github.com/temporalio/temporal/blob/e3496b1c51bfaaae8142b78e4032cc791de8a76f/service/history/configs/config.go#L382) or [50 MB](https://github.com/temporalio/temporal/blob/e3496b1c51bfaaae8142b78e4032cc791de8a76f/service/history/configs/config.go#L380) and will warn you after 10,240 Events or 10 MB.
:::

There is also a limit to the number of certain types of incomplete operations.

Each in-progress Activity generates a metadata entry in the Workflow Execution's mutable state. Too many entries in a single Workflow Execution's mutable state causes unstable persistence.

To protect the system, Temporal enforces a maximum number of incomplete Activities, Child Workflows, Signals, or Cancellation requests per Workflow Execution (by default, 2,000 for each type of operation). Once the limit is reached for a type of operation, if the Workflow Execution attempts to start another operation of that type (by producing a `ScheduleActivityTask`, `StartChildWorkflowExecution`, `SignalExternalWorkflowExecution`, or `RequestCancelExternalWorkflowExecution` Command), it will be unable to (the Workflow Task Execution will fail and get retried).

These limits are set with the following [dynamic configuration keys](https://github.com/temporalio/temporal/blob/main/service/history/configs/config.go):

- `NumPendingActivitiesLimit`
- `NumPendingChildExecutionsLimit`
- `NumPendingSignalsLimit`
- `NumPendingCancelRequestsLimit`

## Workflow Execution Callback limits {#workflow-execution-callback-limits}

There is a limit to the total number of Workflow Callbacks that may be attached to a single Workflow Execution (by default, 32 Workflow Callbacks). Attaching [multiple Nexus callers to a handler Workflow](/nexus/operations#attaching-multiple-nexus-callers) may exceed this limit.

This limit can be set with the following [dynamic configuration key](https://github.com/temporalio/temporal/blob/main/common/dynamicconfig/constants.go#L924):

- `MaxCallbacksPerWorkflow`

## Workflow Execution Nexus Operation limits {#workflow-execution-nexus-operation-limits}

There is a limit to the maximum number of Nexus Operations in a Workflow before Continue-As-New is required.

Each in-progress Nexus Operation generates a metadata entry in the Workflow Execution's mutable state. Too many entries in a single Workflow Execution's mutable state causes unstable persistence. To protect the system, Temporal enforces a maximum number of incomplete Nexus Operation requests per Workflow Execution (by default, 30 Nexus Operations). Once the limit is reached, if the Workflow Execution attempts to start another Nexus Operation (by producing a `ScheduleNexusOperation` Command), it will be unable to do so (the Workflow Task Execution will fail and get retried).
This limit is set with the following [dynamic configuration key](https://github.com/temporalio/temporal/blob/de7c8879e103be666a7b067cc1b247f0ac63c25c/components/nexusoperations/config.go#L38):

- `MaxConcurrentOperations`

---

## Timers and Start Delays

This page discusses [Timers](#timer) and [Start Delays](#delay-workflow-execution).

## What is a Timer? {#timer}

Temporal SDKs offer Timer APIs so that Workflow Executions are deterministic in their handling of time values. Timers in Temporal are persisted, meaning that even if your Worker or Temporal Service is down when the time period completes, as soon as your Worker and Temporal Service become available, the call that is awaiting the Timer in your Workflow code will resolve, causing execution to proceed.

Timers are reliable and efficient. Workers consume no additional resources while waiting for a Timer to fire, so a single Worker can await millions of Timers concurrently.

- [How to set Timers in Go](/develop/go/timers)
- [How to set Timers in Java](/develop/java/timers)
- [How to set Timers in PHP](/develop/php/timers)
- [How to set Timers in Python](/develop/python/timers)
- [How to set Timers in TypeScript](/develop/typescript/timers)
- [How to set Timers in .NET](/develop/dotnet/durable-timers)

The duration of a Timer is fixed, and your Workflow might specify a value as short as one second or as long as several years. Although it's possible to specify an extremely precise duration, such as 36 milliseconds or 15.072 minutes, your Workflows should not rely on sub-second accuracy for Timers. We recommend that you consider the duration a minimum time, one that will be rounded up slightly due to the latency involved with scheduling and firing the Timer. For example, setting a Timer for 11.97 seconds is guaranteed to delay execution for at least that long, but in practice the delay will likely be closer to 12 seconds.

## What is a Start Delay? {#delay-workflow-execution}

:::tip COMPATIBILITY
Start Delay Workflow Execution is incompatible with both [Schedules](/schedule) and [Cron Jobs](/cron-job).
:::

Start Delay determines the amount of time to wait before initiating a Workflow Execution. This is useful if you have a Workflow you want to schedule out in the future but only want it to execute once, in contrast to recurring Workflows that use Schedules.

If the Workflow receives a Signal-With-Start or Update-With-Start during the delay, it dispatches a Workflow Task and the remaining delay is bypassed. If the Workflow receives a Signal during the delay that is not a Signal-With-Start, the Signal does not interrupt the delay, and the Workflow continues to be delayed until the delay expires or a Signal-With-Start is received.

You can delay the dispatch of the initial Workflow Execution by setting this option in the Workflow Options field of your chosen SDK. This delay only applies to the initial Workflow Execution and does not affect subsequent executions, such as when the Workflow Continues-As-New.

---

## Temporal Workflow Execution Overview

This page provides an overview of Workflow Execution:

- [What is a Workflow Execution?](#workflow-execution)
- [Replay](#replay)
- [Commands and awaitables](#commands-awaitables)
- [What is a Command?](#command)
- [Checking Workflow Execution Status](#workflow-execution-status)
- [Workflow Execution Chain](#workflow-execution-chain)
- [Memo](#memo)
- [State Transition](#state-transition)

## What is a Workflow Execution? {#workflow-execution}
While the Workflow Definition is the code that defines the Workflow, the Workflow Execution is created by executing that code.

A Temporal Workflow Execution is a durable, reliable, and scalable function execution. It is the main unit of execution of a [Temporal Application](/temporal#temporal-application).

- [How to start a Workflow Execution using temporal](/cli/workflow#start)
- [How to start a Workflow Execution using the Go SDK](/develop/go/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the Java SDK](/develop/java/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the PHP SDK](/develop/php/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the Python SDK](/develop/python/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the TypeScript SDK](/develop/typescript/temporal-client#start-workflow-execution)
- [How to start a Workflow Execution using the .NET SDK](/develop/dotnet/temporal-client#start-workflow)

Each Temporal Workflow Execution has exclusive access to its local state. It executes concurrently with all other Workflow Executions, and communicates with other Workflow Executions through [Signals](/sending-messages#sending-signals) and with the environment through [Activities](/activities). While a single Workflow Execution has limits on size and throughput, a Temporal Application can consist of millions to billions of Workflow Executions.

**Durability**

Durability is the absence of an imposed time limit. A Workflow Execution is durable because it executes a Temporal Workflow Definition (also called a Temporal Workflow Function), your application code, effectively once and to completion, whether your code executes for seconds or years.

**Reliability**

Reliability is responsiveness in the presence of failure. A Workflow Execution is reliable because it is fully recoverable after a failure. The Temporal Platform ensures the state of the Workflow Execution persists in the face of failures and outages and resumes execution from the latest state.

**Scalability**

Scalability is responsiveness in the presence of load. A single Workflow Execution is limited in size and throughput but is scalable because it can [Continue-As-New](/workflow-execution/continue-as-new) in response to load. A Temporal Application is scalable because the Temporal Platform is capable of supporting millions to billions of Workflow Executions executing concurrently, which is realized by the design and nature of the [Temporal Service](/temporal-service) and [Worker Processes](/workers#worker-process).

### Replays {#replay}

A Replay is the method by which a Workflow Execution resumes making progress. During a Replay, the Commands that are generated are checked against an existing Event History. Replays happen routinely, and they are what gives the effect that Workflow Executions are resumable, reliable, and durable. For more information, see [Deterministic constraints](/workflow-definition#deterministic-constraints).

If a failure occurs, the Workflow Execution picks up where the last recorded event occurred in the Event History.
- [How to use Replay APIs using the Go SDK](/develop/go/testing-suite#replay)
- [How to use Replay APIs using the Java SDK](/develop/java/testing-suite#replay)
- [How to use Replay APIs using the Python SDK](/develop/python/testing-suite#replay)
- [How to use Replay APIs using the TypeScript SDK](/develop/typescript/testing-suite#replay)
- [How to use Replay APIs using the .NET SDK](/develop/dotnet/testing-suite#replay)

### Commands and awaitables {#commands-awaitables}

A Workflow Execution does two things:

1. Issue [Commands](#command).
2. Wait on Awaitables (often called Futures).

Commands are issued and Awaitables are provided by the use of Workflow APIs in the [Workflow Definition](/workflow-definition).

Commands are generated whenever the Workflow Function is executed. The Worker Process supervises the Command generation and makes sure that it maps to the current Event History. (For more information, see [Deterministic constraints](/workflow-definition#deterministic-constraints).) The Worker Process batches the Commands and then suspends progress to send the Commands to the Temporal Service whenever the Workflow Function reaches a place where it can no longer progress without a result from an Awaitable.

A Workflow Execution may only ever block progress on an Awaitable that is provided through a Temporal SDK API. Awaitables are provided when using APIs for the following:

- Awaiting: Progress can block using explicit "Await" APIs.
- Requesting cancellation of another Workflow Execution: Progress can block on confirmation that the other Workflow Execution is cancelled.
- Sending a [Signal](/sending-messages#sending-signals): Progress can block on confirmation that the Signal was sent.
- Spawning a [Child Workflow Execution](/child-workflows): Progress can block on confirmation that the Child Workflow Execution started, and on the result of the Child Workflow Execution.
- Spawning an [Activity Execution](/activity-execution): Progress can block on the result of the Activity Execution.
- Starting a Timer: Progress can block until the Timer fires.

### What is a Command? {#command}

A Command is a requested action issued by a [Worker](/workers#worker) to the [Temporal Service](/temporal-service) after a [Workflow Task Execution](/tasks#workflow-task-execution) completes.

The action that the Temporal Service takes is recorded in the [Workflow Execution's](#workflow-execution) [Event History](/workflow-execution/event#event-history) as an [Event](/workflow-execution/event). The Workflow Execution can await some of the Events that result from some of the Commands.

Commands are generated by the use of Workflow APIs in your code. During a Workflow Task Execution, there may be several Commands that are generated. The Commands are batched and sent to the Temporal Service as part of the Workflow Task Execution completion request, after the Workflow Task has progressed as far as it can with the Workflow Function. There will always be [WorkflowTaskStarted](/references/events#workflowtaskstarted) and [WorkflowTaskCompleted](/references/events#workflowtaskcompleted) Events in the Event History when there is a Workflow Task Execution completion request.

Commands are described in the [Command reference](/references/commands) and are defined in the [Temporal gRPC API](https://github.com/temporalio/api/blob/master/temporal/api/command/v1/message.proto).

### Status {#workflow-execution-status}

A Workflow Execution can be either _Open_ or _Closed_.
#### Open

An _Open_ status means that the Workflow Execution is able to make progress.

- Running: The only Open status for a Workflow Execution. When the Workflow Execution is Running, it is either actively progressing or is waiting on something.

#### Closed

A _Closed_ status means that the Workflow Execution cannot make further progress because of one of the following reasons:

- Cancelled: The Workflow Execution successfully handled a cancellation request.
- Completed: The Workflow Execution has completed successfully.
- Continued-As-New: The Workflow Execution [Continued-As-New](/workflow-execution/continue-as-new).
- Failed: The Workflow Execution returned an error and failed.
- Terminated: The Workflow Execution was terminated.
- Timed Out: The Workflow Execution reached a timeout limit.

### Workflow Execution Chain {#workflow-execution-chain}

A Workflow Execution Chain is a sequence of Workflow Executions that share the same Workflow Id. Each link in the Chain is often called a Workflow Run. Each Workflow Run in the sequence is connected by one of the following:

- [Continue-As-New](/workflow-execution/continue-as-new)
- [Retries](/encyclopedia/retry-policies)
- [Temporal Cron Job](/cron-job)

A Workflow Execution is uniquely identified by its [Namespace](/namespaces), [Workflow Id](/workflow-execution/workflowid-runid#workflow-id), and [Run Id](/workflow-execution/workflowid-runid#run-id).

The [Workflow Execution Timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout) applies to a Workflow Execution Chain. The [Workflow Run Timeout](/encyclopedia/detecting-workflow-failures#workflow-run-timeout) applies to a single Workflow Execution (Workflow Run).

## What is a Memo? {#memo}

A Memo is a non-indexed set of Workflow Execution metadata that developers supply at start time or in Workflow code and that is returned when you describe or list Workflow Executions. (A Go sketch of attaching a Memo appears at the end of this page.)

The primary purpose of using a Memo is to enhance the organization and management of Workflow Executions. Add your own metadata, such as notes or descriptions, to a Workflow Execution, which lets you annotate and categorize Workflow Executions based on developer-defined criteria. This feature is particularly useful when dealing with numerous Workflow Executions because it facilitates the addition of context, reminders, or any other relevant information that aids in understanding or tracking the Workflow Execution.

:::note Use Memos judiciously
Memos shouldn't store data that's critical to the execution of a Workflow, for some of the following reasons:

- Unlike Workflow inputs, Memos lack type safety
- Memos are subject to eventual consistency and may not be immediately available
- Excessive reliance on Memos hides mutable state from the Workflow Execution History
:::

## What is a State Transition? {#state-transition}

A State Transition is a unit of progress made by a [Workflow Execution](#workflow-execution). Each State Transition is recorded in a persistence store.

Some operations, such as [Activity Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat), require only one or two State Transitions each. With an Activity Heartbeat, there are two: the Activity Heartbeat and a Timer.

Most operations require multiple State Transitions. For example, a simple Workflow with two sequential [Activity Tasks](/tasks#activity-task) (and no retries) produces 11 State Transitions: two for Workflow start, four for each Activity, and one for Workflow completion.
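As promised above, here is a minimal Go sketch of attaching a Memo when starting a Workflow. It assumes a reachable Temporal Service; the Workflow name, Task Queue, and Memo keys are hypothetical:

```go
package main

import (
	"context"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	c, err := client.Dial(client.Options{}) // connects to localhost:7233 by default
	if err != nil {
		log.Fatalln("Unable to create Temporal Client", err)
	}
	defer c.Close()

	// Memo values are serialized and returned when you describe or list
	// Workflow Executions; they are not indexed for search.
	options := client.StartWorkflowOptions{
		ID:        "order-12345",
		TaskQueue: "orders",
		Memo: map[string]interface{}{
			"initiator": "nightly-import",
			"notes":     "backfill for March",
		},
	}
	we, err := c.ExecuteWorkflow(context.Background(), options, "OrderWorkflow", "order-12345")
	if err != nil {
		log.Fatalln("Unable to start Workflow", err)
	}
	log.Println("Started", we.GetID(), we.GetRunID())
}
```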
:::tip NEXT STEPS
For more information on Workflow Execution, please refer to the following subpages:

- [Event](/workflow-execution/event)
- [Workflow Id and Run Id](/workflow-execution/workflowid-runid)
- [Limits](/workflow-execution/limits)
- [Continue-As-New](/workflow-execution/continue-as-new)
- [Timers and Start Delay](/workflow-execution/timers-delays)
:::

---

## Workflow Id and Run Id

This page discusses the following:

- [Run Id](#run-id)
- [Operations leading to non-determinism](#run-id-non-determinism)
- [Workflow Id](#workflow-id)
- [Workflow Id Reuse Policy](#workflow-id-reuse-policy)
- [Workflow Id Conflict Policy](#workflow-id-conflict-policy)

Each Workflow Execution is associated with a user-defined [Workflow Id](#workflow-id), a value which typically carries some business meaning (such as an order number or customer number). Temporal guarantees that there can be at most one Workflow Execution with a given Id running at any point in time, a constraint that helps to protect against unexpected duplication.

In some cases, such as when running the same Workflow at recurring intervals using the Schedules feature, there can be multiple "runs" of a single Workflow Execution over a period of time. In this case, all runs will have the same Workflow Id. However, each run will have a unique, system-generated [Run Id](#run-id).

## What is a Run Id? {#run-id}

A Run Id is a globally unique, platform-level identifier for a [Workflow Execution](/workflow-execution). The current Run Id is mutable and can change during a [Workflow Retry](/encyclopedia/retry-policies). You shouldn't rely on storing the current Run Id, or use it for any logical choices, because a Workflow Retry changes the Run Id and can lead to non-determinism issues.

Temporal guarantees that only one Workflow Execution with a given [Workflow Id](#workflow-id) can be in an Open state at any given time. But when a Workflow Execution reaches a Closed state, it is possible to have another Workflow Execution in an Open state with the same Workflow Id. For example, a Temporal Cron Job is a chain of Workflow Executions that all have the same Workflow Id. Each Workflow Execution within the chain is considered a _Run_.

A Run Id uniquely identifies a Workflow Execution even if it shares a Workflow Id with other Workflow Executions.

### Which operations lead to non-determinism issues? {#run-id-non-determinism}

Operations like `ContinueAsNew`, `Retry`, `Cron`, and `Reset` create a [Workflow Execution Chain](/workflow-execution#workflow-execution-chain), as identified by the [`first_execution_run_id`](https://github.com/temporalio/api/blob/master/temporal/api/history/v1/message.proto). Each operation creates a new Workflow Execution inside a chain run and saves its information as `first_execution_run_id`. Thus, the Run Id is updated during each operation on a Workflow Execution.

- The `first_execution_run_id` is the Run Id of the first Workflow Execution in a Chain run.
- The `original_execution_run_id` is the Run Id when the `WorkflowExecutionStarted` Event occurs.

A Workflow `Reset` changes the first execution Run Id, but preserves the original execution Run Id. For example, when a new Workflow Execution in the chain starts, it stores its Run Id in `original_execution_run_id`. A Reset doesn't change that field, but the current Run Id is updated.

:::caution
Because of this behavior, you shouldn't rely on the current Run Id in your code to make logical choices.
:::

**Learn more**

For more information, see the following link.
- [`message.proto`](https://github.com/temporalio/api/blob/master/temporal/api/history/v1/message.proto#L75-L82)

## What is a Workflow Id? {#workflow-id}

A Workflow Id is a customizable, application-level identifier for a [Workflow Execution](/workflow-execution) that is unique to an Open Workflow Execution within a [Namespace](/namespaces).

- [How to set a Workflow Id](/develop/go/temporal-client#workflow-id)

A Workflow Id is meant to be a business-process identifier, such as a customer identifier or an order identifier.

The Temporal Platform guarantees uniqueness of the Workflow Id within a [Namespace](/namespaces) based on the Workflow Id Reuse Policy. A [Workflow Id Reuse Policy](#workflow-id-reuse-policy) can be used to manage whether a Workflow Id from a Closed Workflow can be reused. A [Workflow Id Conflict Policy](#workflow-id-conflict-policy) can be used to decide how to resolve a Workflow Id conflict with a Running Workflow.

A Workflow Execution can be uniquely identified across all Namespaces by its [Namespace](/namespaces), Workflow Id, and [Run Id](#run-id).

### What is a Workflow Id Reuse Policy? {#workflow-id-reuse-policy}

A Workflow Id Reuse Policy determines whether a Workflow Execution is allowed to spawn with a particular Workflow Id, if that Workflow Id has been used with a previous, and now Closed, Workflow Execution.

It is not possible for a new Workflow Execution to spawn with the same Workflow Id as another Open Workflow Execution, regardless of the Workflow Id Reuse Policy. See [Workflow Id Conflict Policy](#workflow-id-conflict-policy) for resolving a Workflow Id conflict with an Open Workflow Execution.

The Workflow Id Reuse Policy can have one of the following values:

- **Allow Duplicate:** The Workflow Execution is allowed to exist regardless of the Closed status of a previous Workflow Execution with the same Workflow Id. **This is the default policy, if one is not specified.** Use this when it is OK to have a Workflow Execution with the same Workflow Id as a previous, but now Closed, Workflow Execution.
- **Allow Duplicate Failed Only:** The Workflow Execution is allowed to exist only if a previous Workflow Execution with the same Workflow Id does not have a Completed status. Use this policy when there is a need to re-execute a Failed, Timed Out, Terminated, or Cancelled Workflow Execution and guarantee that the Completed Workflow Execution will not be re-executed.
- **Reject Duplicate:** The Workflow Execution cannot exist if a previous Workflow Execution has the same Workflow Id, regardless of the Closed status. Use this when there can only be one Workflow Execution per Workflow Id within a Namespace for the given retention period.
- **Terminate if Running:** Specifies that if a Workflow Execution with the same Workflow Id is already running, it should be terminated and a new Workflow Execution with the same Workflow Id should be started. This policy allows for only one Workflow Execution with a specific Workflow Id to be running at any given time.

The first three values (Allow Duplicate, Allow Duplicate Failed Only, and Reject Duplicate) of the Workflow Id Reuse Policy apply to Closed Workflow Executions that are retained within the Namespace. For example, given the default Retention Period, the Temporal Service can only check the Workflow Id of the spawning Workflow Execution against the Closed Workflow Executions from the last _30 days_.
If you need to start a Workflow for a particular implementation only if it hasn't started yet, ensure that your Retention Period is long enough to check against. If this becomes unwieldy, consider using [Workflow message passing](/encyclopedia/workflow-message-passing) instead of trying to start Workflows atomically.

The fourth value of the Workflow Id Reuse Policy, Terminate if Running, only applies to a Workflow Execution that is currently Open within the Namespace. For Terminate if Running, the Retention Period is not a consideration.

If there is an attempt to spawn a Workflow Execution with a Workflow Id Reuse Policy that won't allow it, the Server will prevent the Workflow Execution from spawning.

### What is a Workflow Id Conflict Policy? {#workflow-id-conflict-policy}

A Workflow Id Conflict Policy determines how to resolve a conflict when spawning a new Workflow Execution with a particular Workflow Id that is used by an existing Open Workflow Execution. By default, such a conflict results in a `Workflow execution already started` error. See [Workflow Id Reuse Policy](#workflow-id-reuse-policy) for managing the reuse of a Workflow Id of a Closed Workflow.

:::note
The default [StartWorkflowOptions](https://pkg.go.dev/go.temporal.io/sdk/internal#StartWorkflowOptions) behavior in the Go SDK is to not return an error when a new Workflow Execution is attempted with the same Workflow Id as an Open Workflow Execution. Instead, it returns a WorkflowRun instance representing the current or last run of the Open Workflow Execution. To return the `Workflow execution already started` error, set `WorkflowExecutionErrorWhenAlreadyStarted` to `true`.
:::

The Workflow Id Conflict Policy can have one of the following values:

- **Fail:** Prevents the Workflow Execution from spawning and returns a `Workflow execution already started` error. **This is the default policy, if one isn't specified.**
- **Use Existing:** Prevents the Workflow Execution from spawning and returns a successful response with the Open Workflow Execution's Run Id.
- **Terminate Existing:** Terminates the Open Workflow Execution, then spawns the new Workflow Execution with the same Workflow Id.

---

## Temporal Workflow

This guide provides a comprehensive overview of Temporal Workflows and covers the following:

- [Workflow Definition](/workflow-definition)
- [Workflow Execution](/workflow-execution)
- [Schedules](/schedule)
- [Dynamic Handler](/dynamic-handler)
- [Cron Job](/cron-job)

## Intro to Workflows

Conceptually, a workflow defines a sequence of steps. With Temporal, those steps are defined by writing code, known as a Workflow Definition, and are carried out by running that code, which results in a Workflow Execution.

In day-to-day conversations, the term Workflow might refer to a Workflow Type, a Workflow Definition, or a Workflow Execution.

1. A **Workflow Definition** is the code that defines your Workflow.
2. The **Workflow Type** is the name that maps to a Workflow Definition. It's an identifier that makes it possible to distinguish one type of Workflow (such as order processing) from another (such as customer onboarding).
3. A **Workflow Execution** is a running Workflow, which is created by combining a Workflow Definition with a request to execute it. You can execute a Workflow Definition any number of times, potentially providing different input each time (e.g., a Workflow Definition for order processing might process order #123 in one execution and order #567 in another execution).
It is the actual instance of the Workflow Definition running in the Temporal Platform.

You'll develop those Workflows by writing code in a general-purpose programming language such as Go, Java, TypeScript, or Python. The code you write is the same code that will be executed at runtime, so you can use your favorite tools and libraries to develop Temporal Workflows.

Temporal Workflows are resilient. They can run, and keep running, for years, even if the underlying infrastructure fails. If the application itself crashes, Temporal will automatically recreate its pre-failure state so it can continue right where it left off.

Each Workflow Execution progresses through a series of **Commands** and **Events**, which are recorded in an **Event History**. Workflows must follow deterministic constraints to ensure consistent replay behavior.

---

## Handling Signals, Queries, & Updates

When Signals, Updates, and Queries arrive at your Workflow, the handlers for these messages will operate on the current state of your Workflow and can use the fields you have set. In this section, we'll give you an overview of how messages work with Temporal and cover how to write correct and robust handlers, addressing topics like atomicity, guaranteeing completion before the Workflow exits, exceptions, and idempotency.

## Handling Messages {#handling-messages}

### Message handler concurrency {#message-handler-concurrency}

If your Workflow receives messages, you may need to consider how those messages interact with one another or with the main Workflow method. Behind the scenes, Temporal is running a loop that works like this: every time the Workflow wakes up (generally, it wakes up when it needs to), it processes messages in the order they were received, followed by making progress in the Workflow's main method.

This execution is on a single thread. While this means you don't have to worry about parallelism, you do need to worry about concurrency if you have written Signal and Update handlers that can block. These can run interleaved with the main Workflow and with one another, resulting in potential race conditions. These methods should be made reentrant.

#### Initializing the Workflow first {#workflow-initializers}

Initialize your Workflow's state before handling messages. This prevents your handler from reading uninitialized instance variables. To see why, refer to the event loop [described above](#message-handler-concurrency): your Workflow processes messages before the first run of your Workflow's main method. The message handler runs first in several scenarios, such as:

- When using [Signal-With-Start](/sending-messages#signal-with-start).
- When your Worker experiences delays, such as when the Task Queue it polls gets backlogged.
- When messages arrive immediately after a Workflow continues as new but before it resumes.

For all languages except Go and TypeScript, use your constructor to set up state. Annotate your constructor as a Workflow Initializer and have it take the same arguments as your Workflow's main method. Note that you can't make blocking calls from your constructor. If you need to block, make your Signal or Update handler [wait](#waiting) for an initialization flag.

In Go and TypeScript, register any message handlers only after completing initialization.

### Message handler patterns {#message-handler-patterns}

Here are several common patterns for write operations, that is, Signal and Update handlers. They don't apply to pure read operations, i.e.,
Queries or [Update Validators](/handling-messages#update-validators):

- Returning immediately from a handler
- Waiting for the Workflow to be ready to process them
- Kicking off Activities and other asynchronous tasks
- Injecting work into the main Workflow
- Finishing handlers before the Workflow completes
- Ensuring your messages are processed exactly once

#### Synchronous handlers

Synchronous handlers don't kick off any long-running operations or otherwise block. They're guaranteed to run atomically.

#### Waiting {#waiting}

A Signal or Update handler can block waiting for the Workflow to reach a certain state using a Wait Condition. See the links below to find out how to use this with your SDK.

#### Running asynchronous tasks

Sometimes, you need your message handler to wait for long-running operations such as executing an Activity. When this happens, the handler will yield control back to [the loop](#message-handler-concurrency). This means that your handlers can have race conditions if you're not careful. You can guard your handlers with concurrency primitives like mutexes or semaphores, but in most languages you should use the Workflow-safe versions of these primitives provided by the SDK. See the links below for examples of how to use them in your SDK.

#### Injecting work into the main Workflow {#injecting-work-into-main-workflow}

Sometimes you want to process work provided by messages in the main Workflow. Perhaps you'd like to accumulate several messages before acting on any of them. For example, message handlers might put work into a queue, which can then be picked up and processed in an event loop that you yourself write.

This option is considered advanced but offers powerful flexibility. And if you serialize the handling of your messages inside your main Workflow, you can avoid using concurrency primitives like mutexes and semaphores. See the links above for how to do this in your SDK.

#### Finishing handlers before the Workflow completes {#finishing-message-handlers}

You should generally finish running all handlers before the Workflow run completes or continues as new. For some Workflows, this means you should explicitly check that all the handlers have completed before finishing. You can await a condition called All Handlers Finished at the end of your Workflow.

If you don't need to ensure that your handlers complete, you may set your handler's Handler Unfinished Policy to Abandon to turn off the warnings. However, note that clients waiting for an Update will get a Not Found error if that Update never completes before the Workflow run completes. See the links below for how to ensure handlers are finished in your SDK.

#### Ensuring your messages are processed exactly once {#exactly-once-message-processing}

Many developers want their message handlers to run exactly once, that is, to be idempotent, in cases where the same Signal or Update is delivered twice or sent by two different call sites. Temporal deduplicates messages for you on the server, but there is one important case where you need to think about this yourself when authoring a Workflow, and one when sending Signals and Updates.

When your Workflow Continues-As-New, you should handle deduplication yourself in your message handler. This is because Temporal's built-in deduplication doesn't work across [Continue-As-New](/workflow-execution/continue-as-new) boundaries, meaning you would risk processing messages twice for such Workflows if you don't check for duplicate messages yourself.
To deduplicate in your message handler, you can use an idempotency key. Clients can provide an idempotency key, and this can be important because Temporal's SDKs provide a randomized key by default, which means Temporal only deduplicates retries of the same call.

- For Updates, if you craft an Update ID, Temporal will deduplicate any calls that use that key. This is useful when you have two different call sites that may send the same Update, or when your client itself may get retried.
- For Signals, you can provide a key as part of your Signal arguments.

Inside your message handler, you can inspect your idempotency key (the Update ID, or the one you provided with the Signal) to determine whether the Workflow has already handled that message. See the links below for examples of solving this in your SDK.

#### Authoring message handler patterns

See examples of the above patterns.

### Update Validators {#update-validators}

When you define an Update handler, you may optionally define an Update Validator: a read operation that's responsible for accepting or rejecting the Update. You can use Validators to verify arguments or make sure the Workflow is ready to accept your Updates.

- If it accepts, the Update will become part of your Workflow's history and the client will be notified that the operation has been Accepted. The Update handler will then run until it returns a value.
- If it rejects, the client will be informed that it was Rejected, and the Workflow will have no indication that it was ever requested, similar to a Query handler.

:::note
Like Queries, Validators are not allowed to block.
:::

Once the Update handler is finished and has returned a value, the operation is considered Completed.

### Exceptions in message handlers {#exceptions}

When throwing an exception in a message handler, you should decide whether to make it an [Application Failure](/references/failures#application-failure). The implications are different between Signals and Updates.

:::caution
The following content applies in every SDK except the Go SDK. See below.
:::

#### Exceptions in Signals

In Signal handlers, throw [Application Failures](/references/failures#application-failure) only for unrecoverable errors, because the entire Workflow will fail. Similarly, allowing a failing Activity or Child Workflow to exhaust its retries, so that it throws an [Activity Failure](https://docs.temporal.io/references/failures#activity-failure) or [Child Workflow Failure](https://docs.temporal.io/references/failures#child-workflow-failure), will cause the entire Workflow to fail. Note that for Activities, this will only happen if you change the default Activity [Retry Policy](https://docs.temporal.io/encyclopedia/retry-policies), since by default they retry forever.

If you throw any other exception, by default, it will cause a [Workflow Task Failure](/references/failures#workflow-task-failures). This means the Workflow will get stuck and will retry the handler periodically until the exception is fixed, for example by a code change.

#### Exceptions in Updates

Doing any of the following will fail the Update and cause the client to receive the error:

- Reject the Update by throwing any exception from your [Validator](https://docs.temporal.io/handling-messages#update-validators).
- Allow a failing Activity or Child Workflow to exhaust its retries, so that it throws an [Activity Failure](https://docs.temporal.io/references/failures#activity-failure) or [Child Workflow Failure](https://docs.temporal.io/references/failures#child-workflow-failure).
  Note that for Activities, this will only happen if you change the default Activity [Retry Policy](https://docs.temporal.io/encyclopedia/retry-policies), since by default they retry forever.
- Throwing an [Application Failure](/references/failures#application-failure) from your Update handler.

Unlike with Signals, the Workflow will keep going in these cases. If you throw any other exception, by default, it will cause a [Workflow Task Failure](/references/failures#workflow-task-failures). This means the Workflow will get stuck and will retry the handler periodically until the exception is fixed, for example by a code change or infrastructure coming back online. Note that this will cause a delay for clients waiting for an Update result.

#### Errors and panics in message handlers in the Go SDK

In Go, returning an error behaves like an [Application Failure](/references/failures#application-failure) in the other SDKs. Panics behave like non-Application-Failure exceptions in other languages, in that they cause a [Workflow Task Failure](/references/failures#workflow-task-failures).

### Writing Signal Handlers {#writing-signal-handlers}

Use these links to see a simple Signal handler.

### Writing Update Handlers {#writing-update-handlers}

Use these links to see a simple Update handler.

### Writing Query Handlers {#writing-query-handlers}

Author Query handlers using these per-language guides.

---

## Sending Signals, Queries, & Updates

This section will help you write clients that send messages to Workflows, covering:

- [Sending Signals](#sending-signals)
- [Sending Updates](#sending-updates)
- [Sending Queries](#sending-queries)

### Sending Signals {#sending-signals}

You can send Signals from any Temporal Client or the Temporal CLI, or Signal one Workflow from another. You can also Signal-With-Start to lazily initialize a Workflow while sending a Signal.

#### Send a Signal from a Temporal Client or the CLI

#### Send a Signal from one Workflow to another

#### Signal-With-Start {#signal-with-start}

Signal-With-Start is a great tool for lazily initializing Workflows. When you send this operation, if there is a running Workflow Execution with the given Workflow Id, it will be Signaled. Otherwise, a new Workflow Execution starts and is immediately sent the Signal.

### Sending Updates {#sending-updates}

:::note
To use the Workflow Update feature in versions prior to v1.25.0, it must be manually enabled. Set the [frontend.enableUpdateWorkflowExecution](https://github.com/temporalio/temporal/blob/main/common/dynamicconfig/constants.go) and [frontend.enableUpdateWorkflowExecutionAsyncAccepted](https://github.com/temporalio/temporal/blob/main/common/dynamicconfig/constants.go) dynamic config values to `true`. For example, with the Temporal CLI, pass both values when starting the dev server:

```command
temporal server start-dev \
  --dynamic-config-value frontend.enableUpdateWorkflowExecution=true \
  --dynamic-config-value frontend.enableUpdateWorkflowExecutionAsyncAccepted=true
```

:::

Updates can be sent from a Temporal Client or the Temporal CLI to a Workflow Execution. This call is synchronous and will call into the corresponding Update handler. If you’d rather make an asynchronous request, you should use Signals.

In most languages (except Go), you may call `executeUpdate` to complete an Update and get its result. Alternatively, to start an Update, you may call `startUpdate` and pass in the Workflow Update Stage as an argument. You have two choices on what to await:

- Accepted - wait until the Worker is contacted, which ensures that the Update is persisted. See [Update Validators](/handling-messages#update-validators) for more information.
- Completed - wait until the handler finishes and returns a result. (This is equivalent to `executeUpdate`.)

The start call will give you a handle you can use to track the Update, determine whether it was Accepted, and ultimately get its result or an error.
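For illustration, here is a minimal sketch of both calls using the Python SDK, assuming a running Workflow that defines an Update handler named `set_language` (a hypothetical name):

```python
from temporalio.client import Client, WorkflowUpdateStage


async def send_updates() -> None:
    client = await Client.connect("localhost:7233")
    handle = client.get_workflow_handle("my-workflow-id")  # hypothetical Workflow Id

    # Synchronous: wait until the handler finishes and returns its result.
    result = await handle.execute_update("set_language", "fr")

    # Asynchronous start: wait only until the Update is Accepted (persisted),
    # then use the returned Update handle to fetch the result later.
    update_handle = await handle.start_update(
        "set_language", "de", wait_for_stage=WorkflowUpdateStage.ACCEPTED
    )
    result = await update_handle.result()
```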
To send an Update to another Workflow (such as a Child Workflow) from within a Workflow, you should do so within an Activity and use the Temporal Client as normal.

There are limits on the total number of Updates that may occur during a Workflow Execution run, and also on the number of concurrent in-progress Updates that a Workflow Execution may have. Use [Update Validators](/handling-messages#update-validators) and [Update IDs](/handling-messages#exactly-once-message-processing) to stay within the system limits in both [Cloud](/cloud/limits#per-workflow-execution-update-limits) and [Self-Hosted](/self-hosted-guide/defaults).

#### Update-With-Start {#update-with-start}

:::tip
For open source server users, [Temporal Server version 1.28](https://github.com/temporalio/temporal/releases/tag/v1.28.0) is recommended.
:::

Update-With-Start sends an Update request, starting a Workflow if necessary. A [`WorkflowIDConflictPolicy`](https://docs.temporal.io/workflow-execution/workflowid-runid#workflow-id-conflict-policy) must be specified. Workflow ID and Update ID can be used as idempotency keys as follows:

- If the Workflow exists and you provided an Update ID, and the Update exists in the latest Workflow Run, then Update-With-Start attaches to the existing Update (regardless of `WorkflowIDConflictPolicy`).
- If the Workflow is closed, it attaches only if the Update has completed.
- Otherwise, it uses [`WorkflowIDConflictPolicy`](https://docs.temporal.io/workflow-execution/workflowid-runid#workflow-id-conflict-policy) and [`WorkflowIDReusePolicy`](https://docs.temporal.io/workflow-execution/workflowid-runid#workflow-id-reuse-policy) as usual to determine whether to start a Workflow, and then starts a new Update immediately.

Update-With-Start is great for latency-sensitive use cases:

- **Lazy Initialization** - Instead of making separate Start Workflow and Update Workflow calls, Update-With-Start allows you to send them together in a single round trip. For example, a shopping cart can be modeled using Update-With-Start. Updates let you add and remove items from the cart. Update-With-Start lets the customer start shopping, whether the cart already exists or they've just started shopping. It ensures the cart, modeled by a Workflow Execution, exists before applying any Update that changes the state of items within the cart. Set your `WorkflowIDConflictPolicy` to `USE_EXISTING` for this pattern, as in the sketch below.
- **Early Return** - Using Update-With-Start you can begin a new Workflow Execution and synchronously receive a response, while the Workflow Execution continues to run to completion. For example, you might model a payment process using Update-With-Start. This allows you to send the payment validation results back to the client synchronously, while the transaction Workflow continues in the background. Set your `WorkflowIDConflictPolicy` to `FAIL` and use a unique Update ID for this pattern if you want to assert that a new Workflow is started rather than attaching to an existing one.
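As a concrete illustration of the lazy-initialization pattern, here is a minimal sketch using a recent version of the Python SDK; the Workflow type, Update name, and Task Queue are hypothetical, and exact method names vary by SDK:

```python
from temporalio.client import Client, WithStartWorkflowOperation
from temporalio.common import WorkflowIDConflictPolicy


async def add_item_to_cart(client: Client, cart_id: str, item: str) -> list[str]:
    start_op = WithStartWorkflowOperation(
        "CartWorkflow",  # hypothetical Workflow type
        id=f"cart-{cart_id}",
        task_queue="cart-task-queue",
        # USE_EXISTING: attach to the running cart if there is one.
        id_conflict_policy=WorkflowIDConflictPolicy.USE_EXISTING,
    )
    # Starts the cart Workflow if needed, then executes the Update,
    # all in a single round trip.
    return await client.execute_update_with_start_workflow(
        "add_item", item, start_workflow_operation=start_op
    )
```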
:::caution
Unlike Signal-With-Start, Update-With-Start is _not_ atomic. If the Update can't be delivered, for example because there's no running Worker available, a new Workflow Execution will still start. The SDKs will retry the Update-With-Start request, but there is no guarantee that the Update will succeed.
:::

### Sending Queries {#sending-queries}

Queries can be sent from a Temporal Client or the Temporal CLI to a Workflow Execution, even if the Workflow has Completed. This call is synchronous and will call into the corresponding Query handler. You can also send a built-in "Stack Trace Query" for debugging.

#### Stack Trace Query {#stack-trace-query}

In many SDKs, the Temporal Client exposes a predefined `__stack_trace` Query that returns the call stack of all the threads owned by that Workflow Execution. This is a great way to troubleshoot a Workflow Execution in production. For example, if a Workflow Execution has been stuck at a state for longer than an expected period of time, you can send a `__stack_trace` Query to return the current call stack. The `__stack_trace` Query name does not require special handling in your Workflow code.

:::note
Stack Trace Queries are available only for running Workflow Executions.
:::
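For example, a minimal sketch in the Python SDK (the Workflow Id is a placeholder):

```python
from temporalio.client import Client


async def dump_stack(workflow_id: str) -> None:
    client = await Client.connect("localhost:7233")
    handle = client.get_workflow_handle(workflow_id)
    # __stack_trace is handled by the SDK; no handler is needed in Workflow code.
    print(await handle.query("__stack_trace"))
```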
---

## Temporal Workflow message passing - Signals, Queries, & Updates

Workflows can be thought of as stateful web services that can receive messages. The Workflow can have powerful message handlers, akin to endpoints, that react to incoming messages in combination with the current state of the Workflow. Temporal supports three types of messages: Signals, Queries, and Updates:

- Queries are read requests. They can read the current state of the Workflow but cannot block in doing so.
- Signals are asynchronous write requests. They cause changes in the running Workflow, but you cannot await any response or error.
- Updates are synchronous, tracked write requests. The sender of the Update can wait for a response on completion or an error on failure.

## How to choose between Signals, Updates, and Queries as a Workflow author? {#choosing-messages}

This section will help you write Workflows that receive messages.

### For write requests

Unlike Signals, Updates are synchronous and must wait for the Worker running the Workflow to acknowledge the request. The following table compares when to use **Signals** versus **Updates**.

| **Requirement type** | **Use Signals when...** | **Use Updates when...** |
| --- | --- | --- |
| **Asynchronous communication** | Clients want to quickly move on after sending an asynchronous message. | Clients want to track the completion of the message. |
| **Result handling** | Clients are okay with "fire and forget": no result or exception needed. | Clients need a result or exception without performing a Query. |
| **Worker availability** | Clients don't depend on the Worker being available. | You want to validate the Update before accepting it into the Workflow and its history. |
| **Concurrency and throughput** | You don’t want to limit the number of messages processed concurrently by a single Workflow. | You don’t need more concurrent Updates per Workflow than the allowed limits for [Cloud](/cloud/limits#per-workflow-execution-update-limits) or [Self-Hosted](/self-hosted-guide/defaults). |
| **Latency sensitivity** | Since clients don’t expect a result, latency is often not relevant when using Signals. | Clients want a low-latency end-to-end operation and are willing to wait for completion or validation. |

### For read requests

You normally want to use a Query, because:

- Queries are efficient: they never add entries to the [Workflow Event History](/workflow-execution/event#event-history), whereas an Update would (if accepted).
- Queries can operate on completed Workflows.

However, because Queries cannot block, sometimes Updates are best. When your goal is to do a read once the Workflow reaches a certain desired state, you have two options:

- Poll periodically with Queries until the Workflow is ready.
- Write your read operation as an Update, which gives you better efficiency and latency, though it writes an entry to the [Workflow Event History](/workflow-execution/event#event-history). See the sketch after the next section.

### For read/write requests

Use an Update for synchronous read/write requests. If your request must be asynchronous, consider sending a Signal followed by polling with a Query.
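Here is a minimal sketch of a read exposed as a blocking Update, assuming the Python SDK; the Workflow and field names are hypothetical:

```python
from typing import Optional

from temporalio import workflow


@workflow.defn
class ReceiptWorkflow:
    def __init__(self) -> None:
        self.receipt: Optional[str] = None

    @workflow.run
    async def run(self) -> None:
        # ... business logic eventually sets self.receipt (not shown) ...
        await workflow.wait_condition(lambda: self.receipt is not None)

    @workflow.update
    async def get_receipt(self) -> str:
        # Unlike a Query, an Update handler may block until the state exists.
        await workflow.wait_condition(lambda: self.receipt is not None)
        assert self.receipt is not None
        return self.receipt
```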
---

## Cloud automation - Temporal feature

Temporal Cloud Automation changes how you manage and scale your cloud infrastructure. Its features enable you to automate critical tasks like user and Namespace management, mTLS certificate rotation, and access control, ensuring security and operational efficiency. Cloud Automation offers secure authentication across all interfaces, reducing errors and enhancing security.

**Key Features:**

- [Secure API Keys](https://docs.temporal.io/cloud/api-keys): Manage resources securely with Temporal Cloud API Keys.
- [Temporal Cloud CLI (tcld)](https://docs.temporal.io/cloud/tcld): Automate operations directly from the command line.
- [Terraform Provider for Cloud](https://docs.temporal.io/cloud/terraform-provider#prerequisites): Scale effortlessly with infrastructure-as-code.

From centralizing cloud operations and automating certificate rotation to streamlining user management and onboarding new teams, Temporal's Cloud Automation features cover a wide range of use cases that enhance efficiency and security across your organization.

---

## Temporal's production deployment features

Transform your Temporal applications into production-ready systems by deploying your application code, Workflows, Activities, and Workers for operational use. When your application is ready to start serving production traffic, we offer two Temporal Service options:

- **[Choose Temporal Cloud for your Temporal Service](/cloud)** Let us handle the Temporal Service operations so you can focus on your applications.
- **[Self-host a Temporal Service](/self-hosted-guide)** Deploy your own production-level Temporal Service to orchestrate your durable applications.

| Feature | Temporal Cloud | Self-hosted |
| --- | --- | --- |
| **Multi-tenant** | ✅ Up to 100 Namespaces | ✅ Unlimited Namespaces |
| **High availability and failover** | ✅ [Namespaces with High Availability features](/cloud/high-availability) | ✅ Global Namespaces & Multi-Cluster Replication |
| **Application state persistence** | ✅ 30-90 day Retention | ✅ Unlimited |
| **Long term state retention** | ✅ Workflow History Export | ✅ Archival |
| **Community support** | ✅ Slack, Forum | ✅ Slack, Forum |
| **Paid support** | ✅ Prioritized responses | ✖️ |

---

## Core application - Temporal feature

**Workflows**, **Activities**, and **Workers** form the core parts of a Temporal Application.

**Workflows**: A Workflow defines the overall flow of the application. You write it in your programming language of choice using the Temporal SDK. Conceptually, a Workflow specifies a sequence of steps and orchestrates the execution of Activities.

**Activities**: An Activity is a method or function that encapsulates business logic prone to failure (e.g., calling a service that may go down). The system can automatically retry these Activities upon some failures. Activities perform a single, well-defined action, such as calling another service, transcoding a media file, or sending an email message.

**Workers**: A Worker executes your Workflow and Activity code.

**Follow one of our tutorials to [Get started](https://learn.temporal.io/getting_started/) learning how to develop Workflows and Activities and run them in Worker Processes.** Or jump straight to a Temporal SDK feature guide:

For a deep dive into Temporal Workflows, Activities, and Workers, visit the following Temporal Encyclopedia pages or enroll in one of [our courses](https://learn.temporal.io/courses/).

- [Temporal Workflows](/workflows)
- [Temporal Activities](/activities)
- [Temporal Workers](/workers)

---

## Data encryption - Temporal feature

Data Converters in Temporal are SDK components that handle the serialization and encoding of data transmitted and received by a Temporal Client. Workflow input and output need to be serialized and deserialized so they can be sent as JSON to the Temporal Service. Temporal provides its own default Data Converter logic, which is transparent to the user when payloads contain plain text or JSON data. For enhanced security, you can implement your own encryption standards using a Codec Server. Temporal's data encryption capabilities ensure the security and confidentiality of your Workflows and provide protection without compromising performance.

Jump straight to a Temporal SDK feature guide.

---

## Debugging - Temporal feature

Temporal offers powerful and efficient debugging capabilities for both development and production. These capabilities help developers inspect and troubleshoot Workflows and Activities with precision, ensuring that Workflows perform as expected. By leveraging detailed event histories and intuitive tooling, you can trace the execution path of Workflows, identify issues, and understand the state of your application at any given point in time.

Jump straight to a Temporal SDK feature guide.

---

## Failure detection - Temporal feature

In Temporal, timeouts detect application failures. The system can then automatically mitigate these failures through retries. Both major application primitives, **Workflows** and **Activities**, have dedicated **timeout configurations** and can be configured with a **Retry Policy**, as the sketch below shows.
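As a sketch of how these fit together, assuming the Python SDK and a hypothetical `charge_card` Activity:

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.common import RetryPolicy


@workflow.defn
class PaymentWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> None:
        await workflow.execute_activity(
            "charge_card",  # hypothetical Activity name
            order_id,
            # Failure detection: a single attempt fails if it runs
            # longer than 30 seconds...
            start_to_close_timeout=timedelta(seconds=30),
            # ...and mitigation: each failure is retried with exponential
            # backoff, up to 5 attempts.
            retry_policy=RetryPolicy(
                initial_interval=timedelta(seconds=1),
                backoff_coefficient=2.0,
                maximum_attempts=5,
            ),
        )
```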
**Follow one of our tutorials to [Get started](https://learn.temporal.io/getting_started/) exploring timeouts and Retry Policies.** Or jump straight to a Temporal SDK feature guide.

For a deep dive into timeouts and Retry Policies, visit the following Temporal Encyclopedia pages or enroll in one of [our courses](https://learn.temporal.io/courses/).

---

## Temporal development and production features

Through its SDKs, Temporal provides features that enable developers to build applications serving a wide range of use cases.

- **[Core application primitives](/evaluate/development-production-features/core-application)**: Develop and run your application with Workflows, Activities, and Workers.
- **[Testing suite](/evaluate/development-production-features/testing-suite)**: Each Temporal SDK comes with a testing suite that enables developers to test their applications as they would any other.
- **[Scheduled Workflows](/evaluate/development-production-features/schedules)**: Start a business process at a specific time or on a given time interval.
- **[Interrupt a Workflow](/evaluate/development-production-features/interrupt-workflow)**: Cancel or terminate a business process (Workflow) that is already in progress and compensate for any steps already taken.
- **Runtime safeguards**: Prevent avoidable errors and issues from executing during runtime.
- **[Failure detection and mitigation](/evaluate/development-production-features/failure-detection)**: Detect failures with timeouts and configure automatic retries to mitigate them.
- **[Temporal Nexus](/evaluate/nexus)**: Connect Temporal Applications across (and within) isolated Namespaces for improved modularity, security, debugging, and fault isolation. Nexus supports cross-team, cross-domain, and multi-region use cases.
- **[Workflow message passing](/evaluate/development-production-features/workflow-message-passing)**: Build responsive applications that react to events at runtime and enable data retrieval from ongoing Workflows.
- **Versioning**: Support multiple versions of your business logic for long-running business processes.
- **[Observability](/evaluate/development-production-features/observability)**: List business processes, view their state, and set up dashboards with metrics.
- **[Debugging](/evaluate/development-production-features/debugging)**: Surface errors and step through code to find issues.
- **[Data encryption](/evaluate/development-production-features/data-encryption)**: Transform data and protect the privacy of the users of your application.
- **[Throughput composability](/evaluate/development-production-features/throughput-composability)**: Break up business processes by data stream, team ownership, or other organizational factors.
- **[Cloud Automation](/evaluate/development-production-features/cloud-automation)**: Simplify cloud management and boost security with Temporal's Cloud Automation.
- **[Low Latency](/evaluate/development-production-features/low-latency)**: Make your applications faster, more performant, and more efficient.
- **[Multi-tenancy](/evaluate/development-production-features/multi-tenancy)**: Enhance efficiency and cost-effectiveness.

For detailed information on Temporal feature release stages and criteria, see this [Product Release Stages Guide](/evaluate/development-production-features/release-stages).
---

## Interrupt a Workflow - Cancellation and Termination

Discover how Temporal enables you to gracefully handle Workflow interruptions through cancellations and terminations. Understand how to stop a Workflow cleanly with cancellation, allowing for proper cleanup and state management. For situations where a Workflow is stuck, termination provides an immediate solution, ensuring your applications remain robust and responsive.

---

## Low latency - Temporal feature

Temporal Cloud provides features that significantly reduce latency compared to self-hosted instances, making your applications faster, more performant, and more efficient. In the world of modern applications, low latency is crucial for ensuring minimal delay in Workflow Executions. This low-latency architecture ensures rapid Workflow Execution and responsiveness, critical for time-sensitive applications and high-performance systems.

Temporal Cloud's custom persistence layer incorporates three key components that contribute to low latency:

- **Better Sharding:** Distributes load across multiple databases, preventing bottlenecks, and enables independent resizing, improving scalability and handling high-traffic events without delay.
- **Write-Ahead Log (WAL):** Aggregates updates in an append-only log and batches them before writing to the database, reducing both write latency and database size.
- **Tiered Storage of Workflow Event History:** Offloads completed Workflow Event Histories, improving database efficiency.

Temporal Cloud provides lower latency, making it suitable for latency-sensitive, large-scale, or business-critical applications.

---

## Multi-tenancy - Temporal feature

Multi-tenancy in Temporal operates at two levels:

## Namespace isolation

[Namespaces](/namespaces) are Temporal's unit of isolation, providing logical separation for multi-tenant deployments in both open source Temporal and Temporal Cloud.

### Open source Temporal

Namespaces in self-hosted Temporal provide:

- **Workflow ID uniqueness**: Temporal guarantees unique Workflow IDs within a Namespace. Different Namespaces can have Workflows with the same ID without conflict.
- **Resource isolation**: Traffic from one Namespace does not impact other Namespaces on the same Temporal Service.
- **Configuration boundaries**: Settings like [Retention Period](/temporal-service/temporal-server#retention-period) and [Archival](/temporal-service/archival) destination are configured per Namespace.
- **Access control**: Use a custom [Authorizer](/self-hosted-guide/security#authorization) on your Frontend Service to restrict who can access each Namespace.
- **Inter-namespace communication**: Use [Nexus](/evaluate/nexus) for controlled communication between Namespaces.

### Temporal Cloud

Temporal Cloud builds on these capabilities with additional isolation guarantees:

- **Independent authentication** via [API keys](/cloud/api-keys) or [mTLS certificates](/cloud/certificates)
- **Built-in [role-based access controls](/cloud/users#namespace-level-permissions)** without custom Authorizer configuration
- **Separate [rate limits](/cloud/limits#namespace-level)** to prevent noisy-neighbor problems
- **[High availability replication](/cloud/high-availability)** across regions

## Application multi-tenancy

Many organizations use Temporal to power their own multi-tenant SaaS applications, isolating their customers' workloads using Task Queues, Search Attributes, and Worker design patterns.
See the [multi-tenant application patterns guide](/production-deployment/multi-tenant-patterns) for detailed recommendations on architecting multi-tenant applications with Temporal.

---

## Observability - Temporal feature

Temporal's observability feature helps you track the state of your Workflows in real time, providing tools for detailed metrics, tracing, comprehensive logging, and visibility into your application state. Monitor performance, trace Activity and Workflow Executions, debug, and filter Workflow Executions to gain deeper insights into your Workflows.

**Key Components of Temporal's Observability and Visibility**

- **Metrics**: Detailed performance metrics to track the health and efficiency of your Temporal Service and Workflows.
- **Tracing**: End-to-end tracing of Workflow and Activity Executions to understand the flow and timing of operations.
- **Logging**: Comprehensive logging capabilities for debugging and auditing purposes.
- **Search Attributes**: Custom attributes that can be used to enhance searchability and provide additional context to Workflow Executions.
- **Web UI**: A user-friendly interface for visualizing and interacting with your Workflows and Temporal Service state.

**Benefits of Temporal's Observability and Visibility Features**

- **Real-time Monitoring**: Track the state and progress of your Workflows as they execute.
- **Performance Optimization**: Identify bottlenecks and optimize your Workflow and Activity implementations.
- **Effective Debugging**: Quickly locate and diagnose issues in your Temporal applications.
- **Compliance and Auditing**: Maintain detailed records of all Workflow Executions for compliance and auditing purposes.
- **Operational Insights**: Gain a deep understanding of your application's behavior and usage patterns.
- **Scalability Management**: Monitor and manage the scalability of your Temporal Service effectively.

Jump straight into the Temporal SDK feature guide.

---

## Temporal product release stages guide

:::tip CHANGELOG
To stay up-to-date with the latest feature changes, visit the [changelog](https://temporal.io/change-log).
:::

This Product Release Stages Guide provides an understanding of how Temporal features are released. It describes and lists the criteria for each release stage, so that you can make informed decisions about the adoption of each new feature.

Product Release Guide Expectations:

| | Pre-release | Public Preview | General Availability |
| --- | --- | --- | --- |
| **Feature access** | Self-hosted Temporal users: everyone; Temporal Cloud: invite only. | Everyone. Temporal Cloud may limit the number of users being onboarded to ensure stability. | Everyone. |
| **Feature completeness** | Limited functionality. | Core functionality is complete. | Mature and feature complete. |
| **API stability** | Experimental; API is subject to change. | API breaking changes are kept to a minimum. | API is stable. |
| **Feature region availability** | Limited regions. | Most regions. | All [regions](/cloud/regions). |
| **Feature support** | Community and engineering team. | [Formal support](/cloud/support#support-ticket). | [Formal support](/cloud/support#support-ticket). |
| **Feature recommended usage** | Experimental. | Production use cases. | Production usage. |
| **Feature Cloud pricing** | No additional cost. | Pricing changes are kept to a minimum. | Pricing is stable. |
| **Feature interoperability** | Limited. | Features are compatible with each other, unless otherwise stated. | Features are compatible with each other. |

## Pre-release {#pre-release}

**Access:** Most Pre-release features are released in the open source Temporal software and are publicly available. However, some features that are specific to hosting Temporal Services, such as [API Keys](/cloud/api-keys), may be specific to Temporal Cloud. In Temporal Cloud, Pre-release features are invite-only: Temporal works directly with a group of existing Temporal Cloud customers to test each Pre-release feature. These customers are invited to provide feedback to the Temporal team.

**Classification:** New features in Pre-release may not be fully mature and may have bugs. Users acknowledge and agree that Pre-release features are provided on an “as-is” basis, and that they are provided without any indemnification, support, warranties, or representation of any kind.

**Feedback:** Feedback is highly encouraged and important for guiding Temporal feature development. We encourage you to share your experience so that you can influence the future direction of Temporal.

**Availability:** Temporal may modify features before they become Generally Available, or may even decide to remove them. This means there is no guarantee that a new feature will become Generally Available. A Pre-release feature can be deprecated at any time. Pre-release features may be disabled by default, and can be enabled via configuration. Temporal Cloud customers can contact the Temporal account team or [Temporal Support Team](/cloud/support#support-ticket) to gain Pre-release access.

## Public Preview {#public-preview}

**Access:** New features in Public Preview are available to everyone.

**Classification:** Features in Public Preview may undergo further development and testing before they are made Generally Available. These features are being refined and are recommended for production usage.

**Feedback:** Temporal users are invited to share feedback via the [Community Slack](http://t.mp/slack), by reaching out directly to the Temporal team at product@temporal.io, or by creating issues in the relevant [GitHub repository](https://github.com/temporalio). Temporal also encourages Temporal Cloud users to submit feedback via [support ticket](/cloud/support#support-ticket). This feedback will assist in guiding the improvements for General Availability.

**Availability:** New features in Public Preview may evolve. The APIs may undergo changes; however, Temporal's goal is to maintain backward compatibility.

## General Availability {#general-availability}

**Access:** Features in General Availability are available to everyone.

**Classification:** The feature is now fully developed, tested, and available for use without further anticipated changes.

**Feedback:** Temporal users are invited to share feedback via the [Community Slack](http://t.mp/slack), by reaching out directly to the Temporal team at product@temporal.io, or by creating issues in the relevant [GitHub repository](https://github.com/temporalio).

**Availability:** Features in General Availability are released with stable APIs and recommended for production use with a committed SLA.

:::info Exceptions
There may be exceptions for different features, but this is the typical expectation. Any variation will be documented.
:::

---

## Schedules - Temporal feature

Temporal Schedules is a feature that allows you to schedule Temporal Workflows to run at specified times or intervals, adjusting for peak use. It offers a flexible way to automate and manage your Temporal Workflows, ensuring your business processes run smoothly and efficiently, especially when handling time-sensitive tasks.

1. **Automate Repetitive Tasks:** Schedules automate repetitive tasks, reducing manual intervention and ensuring timely execution of business processes.
2. **Enhanced Workflow Control and Observability:** Gain complete control over your automation processes. With Schedules, you can create, backfill, delete, describe, list, pause, trigger, and update Workflow Executions.
3. **Flexible Timing:** Schedule Workflow Executions to run at regular intervals or specific future times, ensuring they execute precisely when needed.
4. **Reliable and Scalable:** Designed for reliability and scalability, Temporal Schedules handle the complexities of distributed systems while ensuring your Workflows run as intended, even during failures.
5. **Eliminate External Dependencies:** Schedules remove the need to integrate external scheduling systems.

Jump straight to a Temporal SDK feature guide.

---

## Temporal Nexus - Temporal feature

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO
Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability) for [Temporal Cloud](/cloud/nexus) and [self-hosted deployments](/production-deployment/self-hosted-guide/nexus).
:::

## Connect Temporal Applications

Nexus allows you to connect Temporal Applications across (and within) isolated Namespaces. This provides all the benefits of Durable Execution across team and application boundaries with improved modularity, security, debugging, and fault isolation. Nexus supports cross-team, cross-domain, cross-namespace, multi-region, and multi-cloud use cases.

## Why use Nexus?

Unlike other forms of inter-service communication, Nexus combines a familiar programming model with the resiliency of the Temporal Platform and its queue-based Worker architecture.

### Benefits

- **Integrated Temporal experience** - with improved security, observability, and reliability.
- **Microservice contracts** - suitable for sharing across teams or domains.
- **Abstract and share underlying Temporal primitives** - like Workflows, Signals, or Updates.
- **At-least-once execution guarantees** - with support for exactly-once execution using Workflow policy.
- **Improved security and blast-radius isolation** - with separate Namespaces for each team or domain.
- **Modular design** - for streamlined multi-team development.
- **Custom handlers** - that execute arbitrary code.
- **No error-prone boilerplate code** - with Temporal SDK support to build and use Nexus Services.
- **Same queue-based Worker architecture** - so no bespoke service deployments are needed.

### Use cases

- **Cross-team, cross-domain, and cross-namespace** - Nexus is purpose-built to connect Temporal Applications within and across Namespaces. It addresses the limitations of Child Workflows, Activity Wrappers, and bespoke APIs that target a remote Namespace, such as leaked implementation details, second-class observability, overly permissive security, and error-prone boilerplate code. Nexus has a streamlined Temporal developer experience, reliable execution, and integrated observability.
- **Share a subset of a Temporal Application** - Abstract and share a subset of an Application as a Nexus Service. Nexus Operations can span any length of execution, be synchronous or asynchronous, and be implemented with Temporal primitives, like Workflows, Signals, or Updates. Expose Services on a Nexus Endpoint for others to use and secure them with access control policies. Nexus Endpoints decouple callers from handlers, so teams can operate more autonomously.
- **Modular design for growth** - Temporal Nexus enables a modular application design that can evolve as you grow. Start with Nexus Services in a monolithic Namespace and move Services to separate Namespaces with small configuration changes.
- **Smaller failure domains** - When teams operate in the same monolithic Namespace, everything is available to everyone, and misbehaving Workers can trigger rate limits that affect all teams operating in that monolithic Namespace. Nexus enables each team to have their own Namespace for improved security, troubleshooting, and fault isolation.
- **Multi-region** - Nexus requests in Temporal Cloud are routed across a global mTLS-secured Envoy mesh within and across AWS and GCP. Built-in Nexus Machinery provides reliable at-least-once execution, and Workflow policy can deduplicate requests for exactly-once execution, even across multi-region boundaries.

### Key features

- **Familiar developer experience** - Temporal SDKs provide an integrated way to build and use Nexus Services.
  - Use Nexus Services from a caller Workflow.
  - Run Nexus Service handlers in a Worker, often the same Worker as underlying Temporal primitives.
  - Implement long-running asynchronous Nexus Operations as Workflows.
  - Handle low-latency synchronous Nexus Operations with Temporal primitives or arbitrary code.
  - Execute Operations with at-least-once semantics by default, and exactly-once semantics using Workflow ID reuse policies.
- **Nexus Endpoints with a queue-based Worker architecture** - Nexus Endpoints are a reverse proxy for Nexus Services.
  - Connect callers and handlers through Nexus Endpoints, for looser coupling.
  - Manage Endpoints in the Nexus Registry using the UI, CLI, or Cloud Ops API.
  - Use a Nexus Endpoint by name, which routes requests to an upstream target Namespace and Task Queue.
  - Handle Nexus requests in a Nexus Worker by polling an Endpoint's target Task Queue, with automatic load balancing.
  - Streamline operations by running Nexus Services in existing queue-based Workers.
- **Built-in Nexus Machinery** - Execution guarantees are provided with built-in Nexus Machinery.
  - Execute Nexus Operations with reliable state-machine-based invocation and completion callbacks.
  - Guarantee atomic handoff from Workflow Event History to Nexus Operation state machines.
  - Ensure reliable execution with automatic retries, rate limiting, concurrency limiting, and circuit breaking.
- **Integrated observability** - Execution debugging and observability are integrated into the Temporal Platform.
  - View Nexus Operation lifecycle and error info in Workflow Event History.
  - Debug across Namespaces with bi-directional linking.
  - Generate metrics, traces, and logs.
- **Improved blast radius isolation** - Separate Namespaces isolate underlying Workers and sensitive Workflow state.
  - Limit direct access to a Namespace, while exposing Nexus Endpoints for others to use.
  - Isolate misbehaving Workers that affect rate limits for all Workers in a Namespace.
  - Avoid leaking Workflow implementation details to external callers.
- **Enhanced security and connectivity** - Temporal Cloud provides integrated Nexus access controls and multi-region routing.
  - Connect Applications across Namespaces in an Account with Temporal's private mTLS-secured Envoy mesh.
  - Multi-region connectivity within and across AWS and GCP.
  - Restrict which callers can use a Nexus Endpoint, with built-in Endpoint access controls.
  - Stream audit logs, including Nexus Registry actions to create, update, or delete Endpoints.

## Learn more

To connect with the Nexus community, join the [#nexus](https://temporalio.slack.com/archives/C07LQN0JK9B) channel in [Temporal Slack](https://t.mp/slack).

---

## Temporal Testing Suite - Temporal feature

In the context of Temporal, you can create these types of automated tests:

1. End-to-end: Running a Temporal Server and Worker with all its Workflows and Activities; starting and interacting with Workflows from a Client.
2. Integration: Anything between end-to-end and unit testing. Running Activities with mocked Context and other SDK imports (and usually network requests). Running Workers with mock Activities, and using a Client to start Workflows. Running Workflows with mocked SDK imports.
3. Unit: Running a piece of Workflow or Activity code and mocking any code it calls.

Jump straight to a Temporal SDK feature guide.

---

## Child Workflows - Temporal feature

In Temporal, **Child Workflows** enable applications to achieve another level of composability when it comes to throughput. The following example scenarios are a few reasons to use this feature:

- To create a separate service that can be invoked from multiple other services or applications.
- To partition a step into smaller chunks.
- To manage a dedicated resource and guarantee uniqueness.
- To execute logic periodically without overwhelming the parent business process.

See the SDK feature guides for implementation details. For a deep dive into Child Workflows, see the [Child Workflows Encyclopedia page](/child-workflows).

---

## Workflow message passing - Temporal feature

Need to interact with your Workflow from outside of it? Think about use cases like these:

- Your shipment-tracking Workflow needs to know when the item leaves the warehouse and is loaded onto the truck. **Signal** your Workflow when the truck driver scans the barcode.
- Folks in your company want to track the progress of their data migration Workflows. **Query** your running batch Workflow to get the data for the progress bar.
- Your eCommerce shopping cart Workflow needs to know when a new item is added. **Update** it to add the item and receive back the current items to render.

Temporal provides Signals, Queries, and Updates to allow rich interactivity with your running Workflows.

**Signals**: Signal to send messages asynchronously to a running Workflow, changing its state or controlling its flow in real time.

**Queries**: Query to check the progress of your Workflow or debug its internal state in real time.

**Updates**: Update to send synchronous requests to your Workflow and track them in real time.

To learn more about using these powerful primitives, see our Encyclopedia entry. For a deeper dive into Workflow message passing, enroll in one of [our courses](https://learn.temporal.io/courses/interacting_with_workflows). If you want to jump straight to implementation details, see the SDK feature guides.

---

## Evaluate Temporal

Temporal is designed to make developing distributed applications a delightful experience.
Developers benefit from a clear approach to structuring their code and visibility into the state of their application. Applications benefit from fault tolerance and execution guarantees. Thousands of companies of all sizes are leveraging Temporal's capabilities for both mission-critical and standard workloads.

- [Why Temporal](/evaluate/why-temporal)
- [Development and production features](/evaluate/development-production-features)
- [Use cases](/evaluate/use-cases-design-patterns)
- [Temporal Cloud](/cloud)
- [Security](/security)

---

## Temporal Cloud Actions

Actions track both the progress of a Workflow (such as Workflow Start, Schedule Started, or Workflow Reset) and broader capabilities enabled by Temporal Cloud. Temporal Cloud Actions are the primary unit of consumption-based pricing for Temporal Cloud. They track billable operations within the Temporal Cloud Service, such as starting Workflows, recording a Heartbeat, or sending messages. The following result in an Action on Temporal Cloud:

## Workflows

- **Workflow started**. Occurs via client start, [Continue-As-New](/workflow-execution/continue-as-new), or [Child Workflow](/child-workflows) start. If a Workflow start fails, an Action is not recorded. De-duplicated Workflow starts that share a Workflow ID do _not_ count as an Action.
- **Workflow reset**. Occurs when a [Workflow](/workflows) is reset. (Actions that occur before a [Reset](/workflow-execution/event#reset) are counted even if they are no longer visible in [Event History](/workflow-execution/event#event-history).)
- **Timer started**. Includes implicit Timers that are started by a Temporal SDK when timeouts are set, such as `AwaitWithTimeout` in Go or `condition` in TypeScript.
- **Search Attribute upsert requested**. Occurs for each invocation of the `UpsertSearchAttributes` command. Multiple Search Attributes updated in a single `UpsertSearchAttributes` command count as one Action. Search Attributes specified during Workflow start are _excluded_ from Action counts. The `TemporalChangeVersion` Search Attribute, used for Workflow versioning, is also exempt from Action counting.
- **Signal sent**. An Action occurs for every [Signal](/sending-messages#sending-signals), whether sent from a Client or from a Workflow. Also, one total Action occurs for any [Signal-With-Start](/sending-messages#signal-with-start), regardless of whether the Workflow starts.
- **Query received by Worker**. An Action occurs for every [Query](/sending-messages#sending-queries), including viewing the call stack in the Temporal Cloud UI, which results in a Query behind the scenes.
- **Update received by Worker**. An Action occurs for every successful [Update](/sending-messages#sending-updates) and every [rejected](/handling-messages#update-validators) Update. This includes [Update-With-Start](/sending-messages#update-with-start), and is in addition to the start Action in the case when the Workflow starts as well. De-duplicated Updates that share an Update ID do _not_ count as an Action.
- **Side Effect recorded**. For a mutable [Side Effect](/workflow-execution/event#side-effect), an Action occurs only when the value changes.
- **Workflow Execution Options updated.** An Action occurs for every [Workflow-Execution-Options-Updated](/references/events#workflowexecutionoptionsupdated) event. This includes attaching a Workflow completion callback or modifying a Workflow versioning override.

## Child Workflows

- **Start Child Workflow** and **Child Workflow Execution**.
When the parent Workflow durably records the intent to start a Child Workflow, it results in two Actions: one for starting the Workflow, and another for the attempted Execution.

## Activities

- **Activity started or retried**. Occurs each time an Activity is started or retried.
- **Local Activity started**. All [Local Activities](/local-activity) associated with one Workflow Task count as a single Action. That's because Temporal Cloud counts all [RecordMarkers](/references/commands#recordmarker) from each Workflow Task as one Action, not _N_ Actions. Please note:
  - Each additional Workflow Task heartbeat afterward counts as an additional Action.
  - Local Activities retried following a Workflow Task heartbeat count as one Action (capped at 100 Actions).
- **Activity Heartbeat recorded**. A Heartbeat call from Activity code counts as an Action only if it reaches the [Temporal Server](/temporal-service/temporal-server). Temporal SDKs throttle [Activity Heartbeats](/encyclopedia/detecting-activity-failures#activity-heartbeat); the default throttle is 80% of the [Heartbeat Timeout](/encyclopedia/detecting-activity-failures#heartbeat-timeout). Heartbeats don't apply to Local Activities.

## Schedules

The [Schedules](/schedule) feature allows you to schedule a Workflow to start at a particular time. Each execution of a Schedule accrues three Actions:

- **Schedule Start**. This accounts for two Actions.
- **Workflow started**. This is a single Action to start the target Workflow. It includes initial Search Attributes as part of the start request.

## Export

[Workflow History Export](/cloud/export) enables you to export closed Workflow Histories to a cloud storage sink of your choice.

- **Workflow exported**. Each Workflow exported accrues a single Action.
- Excluded from APS calculations.

## Temporal Nexus

- For [Nexus Operation scheduled](/references/events#nexusoperationscheduled), the caller Workflow starting a Nexus Operation results in one Action on the caller Namespace.
- For [Nexus Operation canceled](/references/events#nexusoperationcanceled), the caller Workflow canceling a Nexus Operation results in one Action on the caller Namespace.
- The underlying Temporal primitives (such as Workflows, Activities, and Signals) created by a Nexus Operation handler (directly or indirectly) result in the normal Actions for those primitives, billed to the handler's Namespace. This includes retries for underlying Temporal primitives like Activities, but _not_ for handling the Nexus Operation itself or a retry of the Nexus Operation itself.

## Capacity

- For Namespace Capacity Temporal Resource Units (TRUs), Actions are generated up to the included hourly Action allocation in any hour where TRUs are set and actual usage falls beneath that allocation.
- Excluded from APS calculations.

## Usage

Actions usage is tracked across an account in the [usage dashboard](https://cloud.temporal.io/usage) and is visible to Account Owners, Finance Admins, and Global Admins. For individual Namespaces, usage can be seen in the [Namespace summary](https://cloud.temporal.io/namespaces/) for a specific Namespace.

![Temporal Cloud Usage dashboard](/img/cloud/billing/usage-dashboard.png)

## Actions in Workflows

When viewing a Workflow history, events that represent a Billable Action are annotated with the number consumed by the event in the **Billable Actions** column. These Actions are summarized at the top of the Workflow.
![Temporal Cloud Usage dashboard showing aggregated Billable Actions](/img/cloud/billing/aggregate-billable-actions.png)

![Temporal Cloud Usage dashboard showing individual Billable Actions associated with events](/img/cloud/billing/individual-billable-actions.png)

This Billable Action estimate is useful for projecting the cost of Workflows. For example, if you ran a test Workflow that generated 20 Billable Actions and projected that it would run 100 times a day for a month, you could expect that Workflow to generate 20 Actions x 100 runs/day x 30 days = 60,000 Billable Actions per month. You can also use the Billable Action estimate to help optimize Workflows by better understanding your cost drivers.

:::tip Excluded Billable Actions
The Billable Action estimate is an experimental feature and only measures Billable Actions that exist within Workflow Event Histories. Some billable concepts are not included in these calculations, such as:

- Query
- Activity Heartbeats
- Rejected Update Workflow Executions
- Export
- Schedule

Additionally, Workflows with the `TemporalNamespaceDivision` Search Attribute set may not have accurate Billable Action estimates. Treat the Billable Action count as an estimate only; if billable events exist outside of Event History, the Action count could be higher.
:::

[Reach out](https://pages.temporal.io/contact-us) to our team for more information or to help size your number of Actions.

---

## System limits - Temporal Cloud

Temporal Cloud enforces a variety of limits to keep the service reliable, including rate limits (how often something may occur in a unit of time), resource limits (how many of a given resource may exist at any one time), and configuration limits (minimum or maximum values for a setting).

Every limit applies at a specific scope (level of the application):

- At the Temporal Cloud [Account level](#account-level)
- At the [Namespace level](#namespace-level)
- At the [Nexus endpoint level](#nexus-endpoint-level)
- Within the [programming model](#programming-model-level) itself

## Account level

The following limits apply at the Temporal Cloud Account level (per account).

### Users

- Scope: Account
- Default limit: 300 users
- How to increase: [Contact support](/cloud/support#support-ticket)

### Namespaces

- Scope: Account
- Default limit: 10 Namespaces
- How to increase:
  - Automatically increased as you start creating Namespaces, up to a limit of 100.
  - [Contact support](/cloud/support#support-ticket)

## Namespace level

The following limits apply at the Namespace level.

### Actions per second

- Scope: Namespace
- Default limit: 500 Actions per second (APS)
- How to increase:
  - Automatically increases (and decreases) based on the last 7 days of APS usage. Will never go below the default limit.
  - See [Capacity Modes](/cloud/capacity-modes).
  - [Contact support](/cloud/support#support-ticket).

See the [Actions page](/cloud/actions) for the list of Actions.

### Requests per second

- Scope: Namespace
- Default limit: 2,000 requests per second (RPS)
- How to increase:
  - Automatically increases (and decreases) based on the last 7 days of RPS usage. Will never go below the default limit.
  - See [Capacity Modes](/cloud/capacity-modes).
  - [Contact support](/cloud/support#support-ticket).

See the [glossary](/glossary#requests-per-second-rps) for more about RPS.
### Operations per second

- Scope: Namespace
- Default limit: 4,000 operations per second (OPS)
- How to increase:
  - Automatically increases (and decreases) based on the last 7 days of OPS usage. Will never go below the default limit.
  - See [Capacity Modes](/cloud/capacity-modes).
  - [Contact support](/cloud/support#support-ticket).

See the [operations list](/references/operation-list) for the list of operations.

### Schedules rate limit

- Scope: Namespace
- Default limit: 10 Schedule requests per second (RPS)
- How to increase: [Contact support](/cloud/support#support-ticket)

To avoid throttling, don't schedule all your Workflow Executions to start at the same time (daily, weekly, monthly, etc.). Every Temporal SDK supports jittering, which adds small random delays to Schedule specifications, helping to reduce load at any specific moment. Set the `jitter` value to the largest delay you will permit before your Workflow Execution must begin. This approach uniformly distributes scheduled Workflow Execution launches through that period and reduces your Schedule Workflow Execution RPS load. See the sketch after this section.

### Visibility API Rate Limit

- Scope: Namespace
- Default limit: 30 Visibility API calls per second
- Not configurable

All read calls are subject to the Visibility API rate limit.

### Nexus Rate Limit {#nexus-rate-limits}

Nexus requests (such as starting a Nexus Operation or sending a Nexus completion callback) are counted as part of the overall Namespace RPS limit. If too many Nexus requests are sent at once, they may be throttled, along with other requests to the Namespace. Throttling limits the rate at which Nexus requests are processed, ensuring the RPS limit isn't exceeded. You can request that this limit be manually raised by [opening a support ticket](https://docs.temporal.io/cloud/support#support-ticket).

:::note
For the target Namespace of a Nexus Endpoint, even though handling a Nexus Operation itself does not produce Actions, the Nexus requests on a target Namespace do count toward the overall RPS limit for the Namespace as a whole.
:::

### Certificates

Temporal Cloud limits each Namespace to a total of 32 KB or 16 certificates, whichever is reached first.

### Concurrent Task pollers

Temporal Cloud limits each Namespace to 20,000 Activity Task pollers and 20,000 Workflow Task pollers concurrently. Each SDK offers a way to configure per-Worker maximums for Activity and Workflow Task pollers. Those values do not affect the global Namespace limit.

### Default Retention Period

The [Retention Period](/temporal-service/temporal-server#retention-period) is set per Namespace. Temporal Cloud sets the default Retention Period to 30 days. This is configurable in the Temporal Web UI: [navigate to your list of Namespaces](https://cloud.temporal.io/namespaces), choose the Namespace you want to update, and select edit. You can set the Retention Period between 1 and 90 days.

### Batch jobs

A Namespace can have just one [Batch job](/cli/batch) running at a time. Each Batch job operates on a maximum of 50 Workflow Executions per second.
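To illustrate the jitter recommendation in the Schedules rate limit section above, here is a minimal sketch using the Python SDK; the Schedule Id, Workflow type, and Task Queue are hypothetical:

```python
from datetime import timedelta

from temporalio.client import (
    Client,
    Schedule,
    ScheduleActionStartWorkflow,
    ScheduleIntervalSpec,
    ScheduleSpec,
)


async def create_daily_schedule(client: Client) -> None:
    await client.create_schedule(
        "nightly-report",  # hypothetical Schedule Id
        Schedule(
            action=ScheduleActionStartWorkflow(
                "ReportWorkflow",  # hypothetical Workflow type
                id="nightly-report-wf",
                task_queue="reports",
            ),
            # Fire roughly once a day, with up to 10 minutes of random jitter
            # so many Schedules don't all start Workflows at the same instant.
            spec=ScheduleSpec(
                intervals=[ScheduleIntervalSpec(every=timedelta(days=1))],
                jitter=timedelta(minutes=10),
            ),
        ),
    )
```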
### Number of Custom Search Attributes

There is a limit to the number of custom Search Attributes per attribute type per Namespace:

| Search Attribute type | Limit |
| --------------------- | ----- |
| Bool | 20 |
| Datetime | 20 |
| Double | 20 |
| Int | 20 |
| Keyword | 40 |
| KeywordList | 5 |
| Text | 5 |

### Custom Search Attribute names

When creating custom Search Attributes in Temporal Cloud, the attribute names must adhere to the following constraints:

- Maximum characters: 64
- Allowed characters: `[a-zA-Z0-9.,:-_\/@ ]`

For more information on custom Search Attributes, see [Custom Search Attributes limits](/search-attribute#custom-search-attribute).

## Nexus Endpoint level

### Nexus Endpoints limits

By default, each account is provisioned with 100 Nexus Endpoints. You can request increases beyond the initial 100-Endpoint limit by [opening a support ticket](/cloud/support#support-ticket).

## Programming model level

The following limits apply at the programming model level. See also: [Self-hosted Temporal Service defaults](/self-hosted-guide/defaults).

### Identifier length limit

Identifiers, such as Workflow Id, Workflow Type, and Task Queue names, are limited to a maximum length of 1,000 bytes. Note that Unicode characters may use multiple bytes.

### Per message gRPC limit

Each gRPC message received has a limit of 4 MB. This limit applies to all gRPC endpoints across the Temporal Platform.

### Event History transaction size limit

An Event History transaction encompasses a set of operations such as initiating a new Workflow, scheduling an Activity, processing a Signal, or starting a Child Workflow. These operations create Events that are then logged in the Event History. The transaction size limit restricts the total size of Events that can be accommodated within a single transaction. The size limit for any given [Event History](/workflow-execution/event#event-history) transaction is 4 MB. This limit is non-configurable for Temporal Cloud.

### Transaction Payload size limit

Blob size limit for Payloads, including Workflow context and each Workflow and Activity argument and return value:

- The max Payload for a single request is 2 MB.
- The max size limit for any given [Event History](/workflow-execution/event#event-history) transaction is 4 MB.

This limit is non-configurable for Temporal Cloud. The [BlobSizeLimitError guide](/troubleshooting/blob-size-limit-error) provides solutions for handling large Payloads.

### Per Workflow Execution concurrency limits

If a Workflow Execution has 2,000 incomplete Activities, Signals, Child Workflows, or external Workflow Cancellation requests, additional [Commands](/workflow-execution#command) of that type will fail to be applied to that Workflow Execution:

- `ScheduleActivityTask`
- `SignalExternalWorkflowExecution`
- `StartChildWorkflowExecution`
- `RequestCancelExternalWorkflowExecution`

For optimal performance, limit concurrent operations to 500 or fewer. This reduces the Workflow's Event History size and decreases loading time in the Web UI.

### Per Workflow Execution Signal limit

A single Workflow Execution may receive up to 10,000 Signals. After that limit is reached, no more Signals will be processed for that Workflow Execution.

### Per Workflow Execution Update limits

A single Workflow Execution can have a maximum of 10 in-flight Updates and 2,000 total Updates in History.

### Workflow Execution Event History limits

As a precautionary measure, a Workflow Execution's Event History is limited to 51,200 Events or 50 MB.
The Temporal Service warns you after 10,240 Events or 10 MB. This limit applies to all Temporal Workflow Executions, whether on Temporal Cloud or other deployments, and is non-configurable for Temporal Cloud. Long-running Workflows commonly stay under this limit with Continue-As-New (see the sketch at the end of this page). Read more about [Temporal Workflow Execution limits](/workflow-execution/limits) on the [Temporal Workflow](/workflows) documentation page.

### Per Workflow Callback limits

A single Workflow Execution can have a maximum of 32 total Callbacks. This limit may be exceeded when [multiple Nexus callers attach to the same handler Workflow](/nexus/operations#attaching-multiple-nexus-callers). See the Nexus Encyclopedia entry for [additional details](/workflow-execution/limits#workflow-execution-callback-limits).

### Per Workflow Nexus Operation limits {#per-workflow-nexus-operation-limits}

A single Workflow Execution can have a maximum of 30 in-flight Nexus Operations. See the Nexus Encyclopedia entry for [additional details](/workflow-execution/limits#workflow-execution-nexus-operation-limits).

### Nexus Operation request timeout {#nexus-operation-request-timeout}

A Nexus handler has at most 10 seconds to process a single Nexus start or cancel request. The timeout is measured from the calling History Service, and the request must go through matching, so the available time for a handler to respond is often much less than 10 seconds. Handlers should observe the context deadline and ensure they don't exceed it. This includes fully processing a synchronous Nexus Operation and starting an asynchronous Nexus Operation, for example one that starts a Workflow. If a Nexus handler doesn't process a start or cancel request within 10 seconds, it will receive a context deadline exceeded error, and the caller will retry, with exponential backoff, for the ScheduleToClose duration of the overall Nexus Operation. This duration has a default and maximum as defined below in [Nexus Operation duration limits](/cloud/limits#nexus-operation-duration-limits).

### Nexus Operation duration limits {#nexus-operation-duration-limits}

Each Nexus Operation has a maximum ScheduleToClose duration of 60 days. This is most applicable to asynchronous Nexus Operations completed with an asynchronous callback using a separate Nexus request from the handler back to the caller Namespace. For enhanced security, you may in the future sign completion callbacks with a single-use token, and the 60-day maximum allows you to rotate the asymmetric encryption keys used for completion callback request signing. While the caller of a Nexus Operation can configure the ScheduleToClose duration to be shorter than 60 days, the maximum duration cannot extend beyond 60 days and is capped by the server at 60 days.

### Timer duration limit

Timers have a maximum duration of 100 years in Temporal Cloud.

## Worker Versioning level

### Max Worker deployments limits {#max-worker-deployments-limits}

The maximum number of Worker Deployments that the server allows to be registered in a single Namespace. Defaults to 100.

### Max versions in deployment limits {#max-versions-in-deployment-limits}

The maximum number of versions that the server allows to be registered in a single Worker Deployment at a given time. Note that unused versions will be deleted by the system automatically when this limit is reached. Defaults to 100.

### Max Task Queues In Deployment Version limits {#max-task-queues-in-deployment-version-limits}

The maximum number of Task Queues that the server allows to be registered in a single Worker Deployment Version. Defaults to 100.
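Several of the per-Workflow limits above, particularly the Event History limits, are commonly handled with Continue-As-New, which closes the current run and starts a fresh one (with a fresh Event History) carrying forward only the state you pass along. Here is a minimal, hedged sketch in the Python SDK; the `process_batch` Activity, the cursor, and the batch count are illustrative placeholders:

```python
from datetime import timedelta

from temporalio import activity, workflow


@activity.defn
async def process_batch(cursor: int) -> int:
    # placeholder: process one batch of work, return the next cursor
    return cursor + 1


@workflow.defn
class LongRunningWorkflow:
    @workflow.run
    async def run(self, cursor: int = 0) -> None:
        # Do a bounded amount of work per run, well below the Event limits.
        for _ in range(1000):
            cursor = await workflow.execute_activity(
                process_batch,
                cursor,
                start_to_close_timeout=timedelta(minutes=5),
            )
        # Close this run and start a new one with an empty Event History,
        # carrying the cursor forward.
        workflow.continue_as_new(cursor)
```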
--- ## Overview - Temporal Cloud Temporal Cloud is a fully managed durable execution platform. It handles the complexity of running Temporal at scale—persistence, replication, upgrades, and availability—so you can focus on building applications. Your code runs in your environment. Temporal Cloud never sees your application logic or sensitive data. The platform stores encrypted Workflow state and orchestrates execution, while your Workers execute business logic wherever you deploy them. ## How Temporal Cloud works Temporal Cloud operates as the control plane for your distributed applications: 1. **Your environment**: You run Workers that execute your Workflow and Activity code. These can be deployed anywhere—Kubernetes, VMs, serverless, on-premises. 2. **Temporal Cloud**: Manages Workflow state, Event History, task queuing, and scheduling. All data is encrypted in transit and at rest. 3. **Temporal SDKs**: Your applications use the SDK to communicate with Temporal Cloud over secure gRPC connections. This separation means Temporal Cloud scales independently of your application. You control compute resources for your Workers; Temporal handles the orchestration layer. ## Architecture ### Cell-based infrastructure Temporal Cloud uses a cell-based architecture to achieve isolation and scalability. Each cell is a self-contained deployment unit with its own: - Dedicated cloud account and VPC - Kubernetes cluster running Temporal services - Primary database with synchronous replication across three availability zones - Elasticsearch for Workflow visibility and search - Load balancers and ingress management - Observability and certificate infrastructure Cells act as failure domains. If infrastructure within a cell experiences issues, only Namespaces in that cell are affected. This design limits blast radius and enables independent scaling. ### Data plane and control plane **Data plane**: Where your Workflows execute. Each cell processes Workflow operations, persists state, and manages task queues. The data plane is optimized for low latency and high throughput. **Control plane**: Manages provisioning, configuration, and lifecycle operations. When you create a Namespace, the control plane: 1. Selects an appropriate cell in your chosen region 2. Provisions database resources and roles 3. Generates and deploys mTLS certificates 4. Configures ingress routes and validates connectivity The control plane uses Temporal itself (durable execution) to orchestrate these operations reliably. ### Multi-cloud availability Temporal Cloud runs on both AWS and GCP: - **14 AWS regions** spanning North America, Europe, Asia Pacific, and South America - **5 GCP regions** in North America, Europe, and Asia Pacific You can create Namespaces in any supported region. For disaster recovery, you can replicate across regions within a cloud provider or across cloud providers entirely. See [Service regions](/cloud/regions) for the complete list of available regions. 
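To make this client/control-plane split concrete, here is a hedged sketch of a client in your environment connecting to Temporal Cloud with mTLS using the Python SDK. The endpoint, Namespace name, and certificate paths are placeholders for your own values:

```python
from temporalio.client import Client
from temporalio.service import TLSConfig


async def connect_to_cloud() -> Client:
    # Client certificate and key issued under the CA uploaded to your Namespace.
    with open("client.pem", "rb") as f:
        client_cert = f.read()
    with open("client.key", "rb") as f:
        client_key = f.read()
    return await Client.connect(
        "your-namespace.a1b2c.tmprl.cloud:7233",  # placeholder gRPC endpoint
        namespace="your-namespace.a1b2c",         # placeholder Namespace
        tls=TLSConfig(
            client_cert=client_cert,
            client_private_key=client_key,
        ),
    )
```

Workers built from this client run entirely in your environment; only encrypted gRPC traffic crosses the boundary.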
## Built-in reliability Every Temporal Cloud Namespace includes baseline high availability: - **Three-zone replication**: Workflow state synchronously replicates across three availability zones before acknowledging writes - **Automatic failover**: If one zone becomes unavailable, operations continue on the remaining zones - **99.9% SLA**: Contractual uptime guarantee for standard Namespaces ### High Availability features For workloads requiring stronger guarantees, Temporal Cloud offers three replication options: | Deployment | Description | Use case | |------------|-------------|----------| | **Same-region** | Replicate across isolated cells within one region | Single-region applications needing cell-level isolation | | **Multi-region** | Replicate across regions within one cloud provider | Geographic redundancy and compliance requirements | | **Multi-cloud** | Replicate across cloud providers (AWS ↔ GCP) | Maximum resilience against provider-level outages | High Availability Namespaces include: - **99.99% SLA**: Four-nines contractual uptime guarantee - **Sub-1-minute RPO**: Recovery Point Objective for data loss - **20-minute RTO**: Recovery Time Objective for failover completion - **Automatic or manual failover**: Choose your preferred failover strategy See [High Availability](/cloud/high-availability) for configuration details. ## Security model Temporal Cloud implements defense-in-depth security: ### Your code stays with you Temporal Cloud never executes your application code. Workers run in your environment, connecting to Temporal Cloud over encrypted channels. You control access to your compute resources and secrets. ### Client-side encryption The [Data Converter](/dataconversion) lets you encrypt payloads before they leave your Workers. Temporal Cloud stores ciphertext—if the service were compromised, your data remains encrypted. Deploy a [Codec Server](/production-deployment/data-encryption) to decrypt data in the Web UI without sharing keys. ### Network isolation - **mTLS authentication**: Per-Namespace certificate-based authentication for gRPC endpoints - **API key authentication**: Alternative to certificates for simpler key management - **Private connectivity**: AWS PrivateLink and GCP Private Service Connect for traffic that never traverses the public internet ### Compliance Temporal Technologies maintains SOC 2 Type 2 certification and complies with GDPR and HIPAA regulations. Audit logs capture all API operations and can be exported to your security monitoring systems. See [Security model](/cloud/security) for complete details. ## Consumption-based pricing Temporal Cloud charges based on what you use: ### Actions The primary billing unit. Actions are billable operations like starting Workflows, sending Signals, recording Heartbeats, and completing Activities. Pricing starts at $50 per million Actions with volume discounts as you scale. ### Storage - **Active Storage**: Event History for running Workflows - **Retained Storage**: Event History for completed Workflows (configurable retention period up to 90 days) ### Plans Four tiers—Essentials, Business, Enterprise, and Mission Critical—with increasing support levels, included Actions/Storage, and features like SAML and SCIM. The Essentials plan starts at $100/month. Self-serve signup and plan management available at [cloud.temporal.io](https://cloud.temporal.io). See [Pricing](/cloud/pricing) for detailed rates and examples. ## Portability Temporal Cloud runs the same Temporal Server as the open-source distribution. 
This means:

### Zero code changes

Applications built for self-hosted Temporal work on Temporal Cloud without modification. Update your connection configuration to point at your Cloud Namespace—that's it.

### Zero-downtime migration

[Automated migration](/cloud/migrate/automated) uses Workflow replication to move running Workflows from self-hosted to Cloud (or between Cloud regions) without interruption. No Workflow restarts, no data loss, no downtime. [Manual migration](/cloud/migrate/manual) works by updating Clients and Workers to use new Namespace endpoints while existing Workflows complete naturally.

### Bidirectional

Move workloads from self-hosted to Cloud, Cloud to self-hosted, or between Cloud regions and providers. The same migration tooling works in any direction.

## Self-serve operations

Temporal Cloud is designed for self-service:

- **Web UI**: Create Namespaces, manage users, configure settings at [cloud.temporal.io](https://cloud.temporal.io)
- **CLI (`tcld`)**: Automate operations from the command line
- **Terraform provider**: Infrastructure-as-code for Namespaces, users, and configuration
- **Cloud Ops API**: Programmatic access for custom tooling and automation

No support tickets required for standard operations.

## Getting started

1. [Sign up for Temporal Cloud](https://temporal.io/get-cloud)
2. [Create your first Namespace](/cloud/namespaces)
3. [Connect your Workers](/cloud/get-started#set-up-your-clients-and-workers)
4. [Run your first Workflow](/cloud/get-started#run-your-first-workflow)

For existing Temporal users, see [Migration](/cloud/migrate) to move self-hosted workloads to Cloud.

---

## Temporal Cloud pricing

Temporal Cloud is a consumption-based service. You pay only for what you use. Our pricing reflects your use of [_Actions_](#action), [_Storage_](#storage), and [_Support_](/cloud/support#support). It is flexible, transparent, and predictable, so you know your costs.

This page describes the elements of Temporal Cloud pricing. It gives you the information you need to understand and estimate costs for your implementation. For more exact estimates, please reach out to [our team](https://pages.temporal.io/ask-an-expert). Billing and cost information is available directly in the Temporal Cloud UI. For more information, visit the [Billing and Cost](/cloud/billing-and-cost) page.

## Temporal Cloud pricing model {#pricing-model}

This section explains the basis of the Temporal Cloud pricing model and how it works. Your total invoice each calendar month is the combination of Temporal Cloud consumption ([Actions](#action) and [Storage](#storage)) and a [Temporal Cloud Plan](#base_plans) that includes [Support](/cloud/support#support).

### Temporal Cloud Plans {#base_plans}

**How plans work**

Each Temporal Cloud account includes a plan with Support, Actions, Active Storage, Retained Storage, and platform features. Base allocations help you get started with the Temporal platform, so you can better estimate costs.

- Temporal Cloud Plans are charged monthly.
- Action and Storage allocations are reset each calendar month.

Temporal offers four plans: Essentials, Business, Enterprise, Mission Critical.
Prices are outlined in the following table:

| | Essentials | Business | Enterprise | Mission Critical |
| ----------------- | --- | --- | --- | --- |
| Support Targeting | Basic use | Production deployments that scale | Enterprise deployments w/ stringent uptime demands | Mission-critical applications w/ the highest support needs |
| Support Features | Access to Support | P0 Response Times | P0: \<30 Min, 24/7<br />Private Slack | P0: \<15 Min, 24/7<br />Private Slack<br />Dedicated DSE |
| Product Features | 1M Actions<br />1 GB Active Storage<br />40 GB Retained Storage | Commit discounts<br />SAML included<br />SCIM (Add-on)<br />2.5M Actions<br />2.5 GB Active Storage<br />100 GB Retained Storage | Commit discounts<br />SAML included<br />SCIM included<br />10M Actions<br />10 GB Active Storage<br />400 GB Retained Storage | Commit discounts<br />SAML included<br />SCIM included<br />10M Actions<br />10 GB Active Storage<br />400 GB Retained Storage |
| Plan Pricing | Greater of<br />$100/mo or<br />5% of Usage Spend | Greater of<br />$500/mo or<br />10% of Usage Spend | Priced annually: [contact Sales](mailto:sales@temporal.io) for details | Priced annually: [contact Sales](mailto:sales@temporal.io) for details |
| Usage Pricing | [Pay-As-You-Go](#payg) Pricing | Choose from [Pay-As-You-Go](#payg) or [Commitment Pricing](#commitment-pricing) | Choose from [Pay-As-You-Go](#payg) or [Commitment Pricing](#commitment-pricing) | Choose from [Pay-As-You-Go](#payg) or [Commitment Pricing](#commitment-pricing) |

Please note, partial months are prorated to the day. Find a complete description of Support offerings and response times in our [Support](/cloud/support) documentation.

:::note Converting GB to GBh
Active and Retained Storage allocations are translated into GBh at a rate of 1 GB equals 744 GBh (24 hours/day × 31 days = 744 hours, one month of continuous storage).
:::

### Actions {#action}

**What are Temporal Actions?**

Actions are the primary unit of consumption-based pricing for Temporal Cloud. They track billable operations within the Temporal Cloud Service, such as starting Workflows, recording a Heartbeat, or sending messages. **Specific Billable Actions are discussed on the [Actions](/cloud/actions) page.**

[Reach out](https://pages.temporal.io/contact-us) to our team for more information or to help size your number of Actions.

### Storage {#storage}

**How Workflow Storage works**

A Workflow's execution might exist for a few seconds, a day, a month, or even forever. The Temporal Service stores the Workflow Execution's [Event History](/workflow-execution/event#event-history). Under this framework, a Workflow Execution has only two states, open (Active Storage) or closed (Retained Storage).

- _Active Storage_ measures the amount of storage used by active Workflows.
- When the execution of a Workflow finishes, Temporal Cloud stores Event History for a defined [Retention Period](/temporal-service/temporal-server#retention-period), which is set by the user per Namespace. This is _Retained Storage_. Typical uses of Retained Storage include compliance, debugging, workload refresh, and business analytics.
When closed Workflow Histories need to be retained for more than the 90-day maximum period on Temporal Cloud, we recommend using our [**Export**](/cloud/export) feature. Storage costs are measured in gigabyte-hours (GBh).

### Pricing options {#pricing-options}

**How to Pay for Temporal Cloud**

After you exceed your Actions and Storage allocations in your base tier, Temporal Cloud offers two payment options: Pay-As-You-Go and Commitments. Both models meter and bill for three primary components: [Actions](#action), [Storage](#storage), and [your Temporal Cloud Plan](/cloud/support#support).

- With Pay-As-You-Go, you are invoiced each calendar month based on your consumption. Pay-As-You-Go pricing automatically applies volume prices as your Actions scale.
- With Commitments, you pre-purchase your Temporal Cloud spend with Temporal Credits. Temporal Credits pay for your Temporal Cloud consumption, including Temporal Cloud Plan charges.

## Pay-As-You-Go {#payg}

**How does Pay-As-You-Go pricing work?**

Pay-As-You-Go pricing is based on consumption. This section explains how you're billed each calendar month and gives examples.

### Action pricing {#payg-action-pricing}

Actions pricing starts at $50 per million Actions ($0.00005 per Action). You gain progressive volume discounts as you scale. Discounts are based on your account's total usage, metered and billed for each calendar month:

| Actions | Price per Million Actions |
| --------------------- | --- |
| First 5M | $50 |
| Next 5M, up to 10M | $45 |
| Next 10M, up to 20M | $40 |
| Next 30M, up to 50M | $35 |
| Next 50M, up to 100M | $30 |
| Next 100M, up to 200M | $25 |
| Over 200M | Contact [Sales](mailto:sales@temporal.io) for info _More discounts, helpful humans_ |

**Example**

If you consume 11.25M Actions in excess of your Temporal Cloud Plan allocation in one calendar month, your bill for Actions will be:

```
5M Actions ⨉ $50 Per Million Actions = $250
5M Actions ⨉ $45 Per Million Actions = $225
1.25M Actions ⨉ $40 Per Million Actions = $50

Actions $250 (First Tier) + $225 (Second Tier) + $50 (Third Tier) = $525
```

### Storage pricing {#payg-storage-pricing}

Most accounts' storage needs are met by our Temporal Cloud Plans. For additional storage within a calendar month, you are billed for Active and Retained Storage as follows:

| **Storage** | **Price per GBh (USD)** |
| ----------- | ----------------------- |
| Retained | $0.00105 |
| Active | $0.042 |

:::tip
Storage costs are also affected by Temporal System Workflows that back features such as:

- [Schedules](https://docs.temporal.io/schedule): Each Scheduled Workflow contributes to storage usage. Supplied inputs, outputs, and failures all count toward the storage usage incurred by Scheduled Workflows.
- [Batch jobs](https://docs.temporal.io/cli/batch): Batch Workflow Executions also consume storage. These Workflow Executions contribute to overall Active and Retained Storage consumption.
:::

**Example**

If you have 720 GBh of Active Storage and 3,600 GBh of Retained Storage in excess of your Base Tier allocations in one calendar month, your bill will be:

```
720 GBh Active Storage ⨉ $0.042 per GBh = $30.24
3,600 GBh Retained Storage ⨉ $0.00105 per GBh = $3.78

Total Storage Bill: $30.24 Active Storage + $3.78 Retained Storage = $34.02
```

## Temporal Cloud Plan pricing

Your Temporal Cloud Plan pricing is the greater of the minimum monthly price or a percent (%) of your consumption spend:

- The Essentials tier is priced at the greater of $100/month or 5% of your Temporal Cloud consumption.
- The Business tier is priced at the greater of $500/month or 10% of Temporal Cloud consumption.
- The Enterprise and Mission Critical plans must be paid annually. Contact [Sales](mailto:sales@temporal.io) to discuss your needs.

Your Temporal Cloud consumption combines the costs of Actions and Storage.

**Example**

If you are signed up for Essentials, with $3,000 of monthly spend, your bill will be:

```
Greater of $100 or 5% ⨉ $3,000 = $150, so $150.
```

## Commitment Pricing {#commitment-pricing}

**Commitments with Temporal Credits**

Temporal Cloud offers the option to commit to a minimum spend over a given timeframe. In exchange for this commitment you receive additional discounts. Key discount levers include:

- Account Action volume over 200M Actions
- Duration of your commitment (1, 2, or 3 years)

Meet your commitments with any Temporal Cloud spend, including Actions, Storage, and your Temporal Cloud Plan. After making a commitment, Temporal locks in your Actions price based on your expected volume and discounts your Active Storage costs. This price is used to bill your Actions and Active Storage across your account for the timeframe specified in your commitment.

Commitments must be paid for with Temporal Credits. Temporal Credits are used to pay your Temporal Cloud consumption, including Temporal Cloud Plan charges. A Temporal Credit is equivalent to $1 USD. For example, a credit purchase of $20,000 results in 20,000 Temporal Credits. A minimum credit purchase equivalent to the first year of your commitment is required. For multi-year deals, please contact [Sales](mailto:sales@temporal.io) for the most accurate pricing.

### Commitment Pricing Q&A

**How do multi-year commitments work?**

Our sales team works with you to match annual credit purchases in line with your expected spend. This aligns your payments to annual terms rather than one up-front expense.

**What happens if I exhaust my commitment-based Temporal Credits before the end of my term?**

You continue to receive the negotiated discounted prices for the remainder of your term. You'll be invoiced for another credit purchase based on your most recent calendar month's spend. This amount is multiplied by the months remaining in the annual portion of your term. If your previous month's spend was $5,000, and you're 10 months through your annual term, you'll be invoiced for 10,000 Temporal Credits to cover the remaining two months.

**What happens if I have unused Temporal Credits at the end of my term?**

Commitments can be difficult to estimate. Temporal Cloud offers two ways to roll over unused credits:

- When you renew a commitment for the same or larger amount, Temporal Cloud rolls over any unused credits into your new commitment.
- Should you need to downsize your commitment, Temporal Cloud rolls over up to 10% of your initial credit purchase amount into the new commitment.
**How do I make a commitment and purchase Temporal Credits?**

Contact our team at [sales@temporal.io](mailto:sales@temporal.io) or reach out to your dedicated account manager. You can also purchase Temporal Cloud commitment credits through [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-xx2x66m6fp2lo).

### Credit Balance

Your Credit Balance is adjusted each calendar month based on your Credit Usage.

**Example**

An account has purchased credits with an annual spend commitment of $72,000. In the first calendar month, the account is billed $500 for a Business plan, $5,000 for Actions, $250 for Active Storage, and $50 for Retained Storage, a total of $5,800. Their invoice would state:

```
Beginning credit balance of 72,000 - 5,800 credits used = 66,200 Temporal Credits remaining.
```

If you have additional questions about credits and volume-based pricing, please contact [sales](mailto:sales@temporal.io) or, if you're already a Temporal Cloud customer, reach out to your dedicated account executive.

## Other pricing {#other-pricing}

Temporal Cloud has additional pricing for other elements of the platform.

### Capacity Modes {#capacity-modes-pricing}

**What are Capacity Modes?**

Temporal offers On-Demand and Provisioned Capacity modes. On-Demand capacity is automatically adjusted based on past usage. Provisioned Capacity mode lets you define the capacity that is needed by your Workflow and is useful for handling traffic outside of the standard on-demand limits. See details on how capacity is set and the associated limits at [Capacity Modes](/cloud/capacity-modes).

:::tip Support, stability, and dependency info
Provisioned Capacity is currently in [pre-release](/evaluate/development-production-features/release-stages#pre-release). Please contact your AE or Support to enable this feature.
:::

**How does pricing for Capacity Modes work?**

The number of Actions accrued can be impacted by your capacity mode. For On-Demand Capacity mode, Actions are accrued as usual. For Provisioned Capacity, there is a minimum number of Actions that must be used per hour for each [Temporal Resource Unit (TRU)](/cloud/capacity-modes) that is provisioned. If Action volume in an hour does not exceed the minimum allocation, Actions of the subtype `ns_capacity:tru` will be recorded to bring the hour up to the required volume.

The minimum requirement for the first TRU is 0 Actions, as it aligns with the default rate limits on Temporal. For each additional TRU, there is a minimum hourly requirement of 360,000 Actions per hour. 360,000 Actions per hour represents 20% utilization of the requested resources. This can be calculated as 500 APS per TRU × 3,600 seconds (1 hour) × 20% = 360,000 Actions.
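Expressed as a short sketch (the function name is illustrative, not a billing API), the per-hour minimum works out as follows; the worked example below applies the same arithmetic:

```python
# Each TRU beyond the first carries a minimum of
# 500 APS x 3,600 s x 20% = 360,000 Actions per hour.
MIN_ACTIONS_PER_EXTRA_TRU_HOUR = int(500 * 3_600 * 0.20)  # 360,000


def billed_actions_for_hour(trus_requested: int, actions_used: int) -> int:
    """Actions billed for one hour under Provisioned Capacity."""
    extra_trus = max(trus_requested - 1, 0)  # the first TRU has no minimum
    minimum = extra_trus * MIN_ACTIONS_PER_EXTRA_TRU_HOUR
    # If usage falls short of the minimum, the hour is topped up with
    # `ns_capacity:tru` Actions; otherwise actual usage is billed.
    return max(actions_used, minimum)


# Hour 4 of the example below: 500,000 used < 1,080,000 minimum.
assert billed_actions_for_hour(4, 500_000) == 1_080_000
```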
For example, if you have a Namespace that requests 4 TRUs (i.e., 2,000 APS) for 4 hours and have usage as follows:

- Hour 1: 2,000,000 Actions
- Hour 2: 5,000,000 Actions
- Hour 3: 4,000,000 Actions
- Hour 4: 500,000 Actions

Then the cost of Provisioned Capacity would be calculated as:

4 TRUs requested − 1 default TRU included = 3 TRUs with a minimum usage requirement

3 TRUs × 360,000 Actions per hour = 1,080,000 minimum Actions per hour

- Hour 1: 2,000,000 Actions > 1,080,000 minimum Actions = no additional Actions accrued
- Hour 2: 5,000,000 Actions > 1,080,000 minimum Actions = no additional Actions accrued
- Hour 3: 4,000,000 Actions > 1,080,000 minimum Actions = no additional Actions accrued
- Hour 4: 500,000 Actions < 1,080,000 minimum Actions, so 1,080,000 − 500,000 = 580,000 additional Actions will be added to the hour

Total for 4 hours: 2,000,000 + 5,000,000 + 4,000,000 + 1,080,000 = 12,080,000 Actions

If TRUs are changed multiple times within an hour, the highest value within that hour will be used to calculate the minimum Actions required.

Temporal's approach to pricing Provisioned Capacity aligns with our goal to only charge for what you use. The minimum hourly allocation has no impact on your price if you utilize the requested capacity above your default by 20% or more, and only incurs an additional charge if the requested capacity is not used. To avoid being charged for unused provisioned capacity, it is advised to "return" capacity to the pool and switch back to On-Demand mode when usage is stable.

### High Availability feature pricing {#high-availability-features}

**How does the pricing for High Availability (HA) features work?**

For workloads with stringent high availability requirements, Temporal Cloud provides same-region, multi-region, and multi-cloud replicas, which add a failover capability. Enabling HA features for a Namespace automatically replicates Workflow Execution data and metadata to a replica in the same region or in a different region. This allows for a near-seamless failover when incidents or outages occur.

The pricing for High Availability features aligns with the volume of your workloads. Actions and Storage in your Namespace contribute to your Actions and Storage consumption. To estimate costs for this deployment model, apply a 2x multiplier to the Actions and Storage in the Namespace you are replicating and include this scaling in your account's consumption (see the sketch below).

:::tip Future Pricing Update
To align with cloud provider network traffic pricing, we are introducing adjustments for multi-region and multi-cloud replication. Your plan will include a generous base allocation, with additional pay-per-use charges for any network usage beyond that threshold. Most multi-region and multi-cloud customers will not be impacted by this change.
:::

When upgrading an existing Namespace, some points to consider:

- Temporal won't charge for historical Actions completed prior to upgrading to a Namespace with High Availability features. Only ongoing (in-flight) and new Workflow Executions will generate consumption.
- Temporal charges for all Actions of existing (ongoing) and new Workflows from the point of adding a replica.
- Temporal charges for Replicated Storage of retained (historical), running (ongoing), and new Workflow Executions from the point of adding a new replica.
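As a hedged sketch of the 2x estimate described above (the function is illustrative, not a billing API), a replicated Namespace roughly doubles its consumed quantities before per-unit prices are applied:

```python
def estimate_ha_consumption(actions: int, active_gbh: float, retained_gbh: float) -> dict:
    """Rough consumption estimate with one replica enabled (quantities, not dollars)."""
    return {
        "actions": actions * 2,            # every Action is mirrored to the replica
        "active_gbh": active_gbh * 2,      # Active Storage is replicated
        "retained_gbh": retained_gbh * 2,  # Retained Storage is replicated
    }
```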
### SCIM and SSO via SAML pricing {#sso-and-saml}

**What costs are associated with SSO/SAML use?**

Single sign-on (SSO) integration using SAML is included for all customers on the Business, Enterprise, and Mission Critical Plans.

**What costs are associated with SCIM?**

To enable SCIM (System for Cross-domain Identity Management), you need an Enterprise or Mission Critical plan, or an add-on for the Business plan ($500/month). Please note that you must configure SSO via SAML to use SCIM.

### Use case cost estimates {#pricing-estimates}

Temporal Cloud uses a consumption-based pricing model based primarily on [Actions](#action) and [Storage](#storage). Each workload is different. You can estimate the cost of a specific Workflow by running it at a low volume. Use the resulting Storage and compute measurements to project your production-scale cost.

The examples below provide general estimates based on workload size. You can also use our calculator on the pricing page to build your estimate. Our team is always happy to [help you estimate costs](https://pages.temporal.io/contact-us) for your specific workloads and requirements.

| Workload size | Cost (monthly) | Characteristics | Actions | Typical use cases |
| ------------- | -------------- | --- | --- | --- |
| Small | < $50.00 | Modest / transient throughput | < 1M / month _(< 0.38 actions per second)_ | General automation<br />Human-dependent processes<br />Data pipelines<br />Nightly batch processes |
| Medium | < $2K | Steady or burst throughput | < 40M / month _(< 15 actions per second)_ | Transaction & order systems<br />Infrastructure automation<br />Payment processing<br />Batch processes |
| Large | < $15K | Sustained throughput or multiple use cases | < 400M / month _(< 150 actions per second)_ | Data processing / sync<br />Retail order systems<br />KYC & fraud detection |
| Web Scale | $20K+ | "Web scale" and / or numerous use cases | 1B+ / month _(400+ actions per second)_ | Social media applications<br />SaaS application services |

## Billing Questions FAQs {#pricing-faq}

**What payment methods does Temporal accept?**

You can pay with a credit card, ACH, or wire transfer. To pay for Temporal Cloud with an AWS account, sign up for [Temporal Cloud Pay-As-You-Go](https://aws.amazon.com/marketplace/pp/prodview-xx2x66m6fp2lo) on the AWS Marketplace.

**How often will I be billed?**

Temporal Cloud issues invoices for the previous month's usage and costs. Invoices are issued on the 3rd of the month for the previous month. For example, invoices for May will be issued at midnight UTC on June 3rd.

**Where can I view my usage and billing information?**

Account Owners and Finance Admins can view their detailed billing data at any time. Visit the [Usage and Billing dashboards](/cloud/billing-and-cost) in Temporal Cloud.

**How do I purchase Temporal Cloud credits?**

You can purchase Temporal Cloud credits by contacting our team at [sales@temporal.io](mailto:sales@temporal.io).

**What's the minimum cost to run Temporal Cloud?**

The Essentials plan starts at $100/month. Consumption in excess of your plan's allocations is billed on a consumption basis.
**Can I purchase Temporal Cloud through my Amazon, Azure, or Google Cloud Platform Marketplace?**

There are two ways to purchase Temporal Cloud through AWS Marketplace:

- Pay-As-You-Go, available [here](https://aws.amazon.com/marketplace/pp/prodview-xx2x66m6fp2lo)
- Credits, available via private offer; please contact our team at [sales@temporal.io](mailto:sales@temporal.io)

To purchase Temporal Cloud on the Google Cloud Marketplace, please contact our team at [sales@temporal.io](mailto:sales@temporal.io).

**How do I see how many Temporal Cloud credits are remaining?**

To view remaining Temporal Cloud credits, Account Owners and Finance Admins can log in to Temporal Cloud and go to Settings > Billing. You need appropriate administrative permissions to access this section.

**What happens if I exceed my available credits under a promotion such as the startup program?**

Customers with free credits from the startup program or from a promotion are invoiced at the end of the month in which their credit balance is exhausted.

**Do promotional credits expire?**

Credits received through the startup program or an offer have an expiry date. This date is stated as part of the sign-up process.

**How do I update my payment information?**

Account Owners and Finance Admins can update payment information at any time on the Temporal Cloud [Billing](https://cloud.temporal.io/billing) page under the Plan tab. You need appropriate administrative permissions to access this section. Select the "Manage Payment Method" button. See [this overview](/cloud/billing-and-cost) for more details.

**What happens if my payment fails?**

Temporal will periodically send you email reminders to complete the payment.

**How do I view my invoices and billing history?**

Invoices are emailed to Account Owners or the designated billing contacts. Account Owners and Finance Admins can view their [detailed billing information](https://cloud.temporal.io/billing) at any time. See our [billing and cost](/cloud/billing-and-cost) page for details. You need appropriate administrative permissions to access this section. Alternatively, to view invoices and billing history, contact Temporal Finance at [ar@temporal.io](mailto:ar@temporal.io).

**Does Temporal charge sales tax/VAT?**

We charge applicable sales tax in US jurisdictions as required.

**How do I cancel my account?**

Account Owners can delete their account and cancel their subscription in the Plans tab in the billing center. See the [billing and cost](/cloud/billing-and-cost) page for details on how to access the billing center.

**Will I lose access immediately if I cancel my account?**

Customers lose access to Temporal Cloud once Temporal completes the off-boarding process. Billing is independent of this process.

**Can I reactivate my account after cancellation?**

No. When your account is canceled, your account data is deleted and cannot be restored. To return to Temporal Cloud, you must sign up again. We will assign you a new Temporal account and consider you a new customer.

---

## Service regions - Temporal Cloud

You can access Temporal Cloud from anywhere with Internet connectivity, no matter where your Temporal Cloud Namespaces are physically located. Your applications can live in the cloud environment or data center of your choice. With that in mind, you _will_ reduce latency by creating Namespaces in a region close to where you host your Workers. This page enumerates the current regions supported by Temporal Cloud Namespaces.
:::tip Service Availability
Visit [status.temporal.io](https://status.temporal.io) to check the status of our supported regions. On that page, you can also subscribe to updates to receive email notifications whenever Temporal creates, updates, or resolves an incident.
:::

### AWS Service Regions

Temporal Cloud operates in the following Amazon Web Services (AWS) regions:

### GCP Service Regions

Temporal Cloud operates in the following Google Cloud (GCP) regions:

---

## Service availability - Temporal Cloud

The operating envelope of Temporal Cloud includes throughput, latency, and limits. Service regions are listed on [this page](/cloud/regions). If you need more details, [contact us](https://pages.temporal.io/contact-us).

## Throughput expectations {#throughput}

**What kind of throughput can I get with Temporal Cloud?**

Each Namespace in Temporal has a rate limit, which is measured in [Actions](/cloud/pricing#action) per second. Temporal offers two different modes for adjusting capacity: On-Demand Capacity or Provisioned Capacity. With On-Demand Capacity, Namespace capacity is increased automatically along with usage. With Provisioned Capacity, you can control your capacity limits by requesting Temporal Resource Units (TRUs).

:::tip Support, stability, and dependency info
Provisioned Capacity is currently in [pre-release](/evaluate/development-production-features/release-stages#pre-release). Please contact your AE or Support to enable this feature.
:::

## Latency Service Level Objective (SLO) {#latency}

**What kind of latency can I expect from Temporal Cloud?**

Temporal Cloud has a p99 latency SLO of 200ms per region. The same SLO for normal Worker requests (commands and polling) applies to Nexus in both the caller and handler Namespaces.

### Historical latency data

Latency over a week-long period for starting and signaling Workflow Executions was as follows:

#### January 2026

| Operation | p50 | p90 | p99 |
| :--------------------------------- | :----: | :--: | ---: |
| `StartWorkflowExecution` | 14ms | 21ms | 69ms |
| `SignalWorkflowExecution` | 11ms | 19ms | 46ms |
| `SignalWithStartWorkflowExecution` | 19ms | 37ms | 95ms |

#### March 2024

| Operation | p90 | p99 |
| :--------------------------------- | :--: | ---: |
| `StartWorkflowExecution` | 24ms | 54ms |
| `SignalWorkflowExecution` | 14ms | 40ms |
| `SignalWithStartWorkflowExecution` | 24ms | 61ms |

Latency observed from the Temporal Client is influenced by other system components like the Codec Server, egress proxy, and the network itself. Also, concurrent operations on the same Workflow Execution may result in higher latency.

---

## Service Level Agreement (SLA) - Temporal Cloud

**What is Temporal Cloud's Service Level Agreement (SLA)?**

Temporal Cloud provides two availability levels: the [service availability](https://en.wikipedia.org/wiki/Reliability,_availability_and_serviceability) and the contractual [service level agreement](https://en.wikipedia.org/wiki/Service-level_agreement) (SLA). These levels are set by your deployment mode:

- **Temporal Cloud with standard single-region deployment**: Standard Temporal Cloud deployment provides 99.99% availability and a contractual service level agreement (SLA) of 99.9% guarantee against service errors.
- **Temporal Cloud with High Availability feature Namespace deployment**: Temporal Cloud Namespaces that use the High Availability feature provide 99.99% availability and a contractual service level agreement (SLA) of 99.99% guarantee against service errors.
The same SLA for normal Worker requests (commands and polling) applies to Nexus in both the caller and handler Namespaces.

To calculate the service-error rate, Temporal Cloud captures all requests that arrive in a Namespace during a five-minute interval. We record the number of gRPC service errors that occurred. For each Namespace, we calculate the service-error rate as 1 − (count of errors ÷ count of requests). Rates are averaged per month and reset quarterly.

Errors recorded against the SLA are service errors, such as the `UNAVAILABLE` [gRPC status code](https://grpc.github.io/grpc/core/md_doc_statuscodes.html). The following errors are _not_ counted against the SLA:

- `ClientVersionNotSupported`
- `InvalidArgument`
- `NamespaceAlreadyExists`
- `NamespaceInvalidState`
- `NamespaceNotActive`
- `NamespaceNotFound`
- `NotFound`
- `PermissionDenied`
- `QueryFailed`
- `RetryReplication`
- `StickyWorkerUnavailable`
- `TaskAlreadyStarted`
- `Throttling (resources exhausted; triggers retry)`
- `WorkflowExecutionAlreadyStarted`
- `WorkflowNotReady`

Our internal alerting system is based on a [service level objective](https://en.wikipedia.org/wiki/Service-level_objective) (SLO) for all errors, not just errors that count against the SLA. When we receive an alert that an SLO is not being met, we page our on-call engineers, which often means that issues are resolved before they become noticeable.

Internally, we implement a cell architecture. Each cell contains the software and services necessary to host a Namespace. Within each cell, the components are distributed across a minimum of three availability zones per region.

For current system status and information about recent incidents, see [Temporal Status](https://status.temporal.io).

---

## Services, support, and training - Temporal Cloud

Temporal Cloud includes the level of technical support, guidance, services, and training needed to onboard you successfully and to assist with the design and deployment of your application efficiently and at scale. Our team has extensive knowledge of Temporal and a broad set of skills to help you succeed with any project. Temporal Cloud provides several levels of support, from break/fix assistance to help with onboarding, design and code reviews for your application, pre-production optimization, and operational readiness.

:::note
The content of this page applies to Temporal Cloud customers only.
:::

## Services offered by Temporal Cloud {#support}

| | Essentials | Business | Enterprise | Mission Critical |
| --------------------------- | --- | --- | --- | --- |
| Support Staff | Trained staff providing break-fix support and general guidance | Trained staff providing break-fix support and general guidance | Developer experts who provide advanced support | Developer experts who provide advanced support |
| Technical Guidance | Core platform config, platform access, documented features, and basic inquiries | Advanced technical support, Workflow troubleshooting, SDK implementations, and Worker configuration. Quarterly code review or design implementation best practices. | Business, plus expert-led code reviews and design implementation best practices, available as needed | Enterprise, plus expert guidance on Workflow latency monitoring and optimization; performance recommendations based on real-time tests |
| Billing & Cost Optimization | Generic billing questions | Generic billing questions | Quarterly review of spend | Quarterly review of spend, proactive cost optimization |

## Temporal Cloud support guarantees {#guarantees}

Temporal endeavors to ensure you are successful with Temporal Cloud. We offer explicit guarantees for support. Temporal Cloud customers get break/fix support with an agreed-upon set of SLAs for prioritized issues. We use a ticketing system for entering, tracking, and closing these issues. If an issue occurs, the team also provides support through a dedicated Slack channel, forums, and a knowledge base. We offer support levels defined by their availability and SLAs in the following table:

| | Essentials | Business | Enterprise | Mission Critical |
| --- | --- | --- | --- | --- |
| **Availability** (based on time zones) | **P0–3**: 9–5 Mon–Fri | **P0–3**: 9–5 Mon–Fri | **P0**: 24×7 (On-Page Service)<br />**P1–3**: 9–5 Mon–Fri | **P0**: 24×7 (On-Page Service)<br />**P1**: 9–5, 7 days/week<br />**P2–3**: Mon–Fri |
| **Response Time** | **P0**: 1 business day<br />**P1**: 1 business day<br />**P2**: 1 business day<br />**P3**: 2 business days | **P0**: 2 business hours<br />**P1**: 2 business hours<br />**P2**: 1 business day<br />**P3**: 2 business days | **P0**: 30 minutes<br />**P1**: 1 business hour<br />**P2**: 4 business hours<br />**P3**: 1 business day | **P0**: 15 minutes<br />**P1**: 1 business hour<br />**P2**: 4 business hours<br />**P3**: 1 business day |
| **DSE** | - | - | Add-on | DSE included (1 unit) |
| **Channels** | Community<br />Temporal Support Portal | Community<br />Temporal Support Portal | Community<br />Temporal Support Portal<br />Private Slack | Community<br />Temporal Support Portal<br />Private Slack |

:::info Business Hours Timezones
Business Hours will be specified in your contract, including one of three locations: US Pacific time, European Central time, Australia Eastern time.
:::

**Priority definitions**

- **P0 - Critical** (Production impacted): The Temporal Cloud service is unavailable or degraded with a significant impact.
- **P1 - High** (Production issue): An issue related to production workloads running on the Temporal Cloud service, or a significant project is blocked.
- **P2 - Normal** (General issues): General Temporal Cloud service or other issues where there is no production impact, or a workaround exists to mitigate the impact.
- **P3 - Low** (General guidance): Questions or an issue with the Temporal Cloud service that is not impacting system availability or functionality.

:::note On-Page Service
P0: 24×7 (On-Page Service) is offered for Enterprise and Mission Critical accounts.
:::

For pricing details of these support levels, please visit our [pricing page](/cloud/pricing).

## Temporal Dedicated Support Engineer {#dedicated-support-engineer}

Customers on the Mission Critical plan, and Enterprise customers who opt in, receive access to a Dedicated Support Engineer. We offer:

- Direct access to a senior developer expert, who becomes part of your Temporal account team, adding deep technical expertise.
- A high-touch engagement model that goes beyond traditional support to deliver transformative value through hands-on collaboration, proactive optimization, implementation design, and operations.
- Faster issue resolution with direct assistance from someone who already knows your implementation.
- Focused advisory on best practices and development pairing to ensure high-quality code and scalability.
- Optimizations through regular checks and recommendations to improve performance and efficiency.
- Priority access to a senior engineer for up to 20 hours per month, providing expert guidance and proactive support for one business unit or major group, specifically within a single region.

Our Services focus on local time zone alignment to ensure optimal responsiveness and efficiency. Additional service units can be purchased to cover additional groups or regions at $6,000/month per unit. One unit of Mission Critical Support includes:

- Up to 20 hours per month
- One major group or business unit
- Limited to one region
- Quarterly onsite visits

## Ticketing

Temporal offers a ticketing system for Temporal Cloud customers. We also have an active [community Slack](https://temporalio.slack.com) and an active [community Discourse forum](https://community.temporal.io/) where you can post questions and ask for help.

:::info
The Temporal Support Portal is for Cloud customers only. Other Temporal users (non-cloud) have full community access excluding the "#support-cloud" channel. All Cloud customers pay for support as part of their plan.
:::

### Access Temporal Support

1. Go to [support.temporal.io](https://support.temporal.io/).
2. If prompted, log in to Temporal Cloud using the same method you normally use (e.g., Google, Microsoft, email-password, or other methods).
3. You will be presented with a screen where you can view open and closed tickets for your Temporal account, as well as submit a new ticket.

To request assistance from Temporal Support, see [Create a ticket](#support-ticket).

### Create a Ticket {#support-ticket}

:::info
This procedure applies only to Temporal Cloud customers whose contracts include paid support. If you need assistance and don't have paid support, post your request in the [Temporal Community Forum](https://community.temporal.io) or the `#support-cloud` channel of the [Temporal workspace](https://t.mp/slack) in Slack.
:::

To create a ticket in the Temporal Support Portal:

1. Go to [support.temporal.io](https://support.temporal.io/).
2. If prompted, log in to Temporal Cloud using the same method you normally use (e.g., Google, Microsoft, email-password, or other methods).
3. Click the **Create Ticket** button in the top right corner.
4. On the **Submit a ticket** page, enter the details of your request into the form. **Name**, **Subject**, and **Description** are required.
5. At the bottom of the form, choose **Submit**.

## Developer resources {#developer-resources}

Temporal offers developer resources and a variety of hands-on tutorials to get you started and learn more advanced Temporal concepts.

- [Get started with Temporal](https://learn.temporal.io/getting_started): Start your journey with Temporal with this guide that helps you set up your development environment, run an existing Temporal app, and then build your first app from scratch using our SDKs.
- [Courses](https://learn.temporal.io/courses): Learn and apply Temporal concepts in our free, self-paced, hands-on courses.
- [Tutorials](https://learn.temporal.io/tutorials): Apply Temporal concepts to build real-world applications with these hands-on tutorials.
- [Example applications](https://learn.temporal.io/examples): Explore example applications that use Temporal and gain a clearer understanding of how Temporal concepts work in a complex application.

---

## Understanding Temporal

Temporal offers an entirely new way to build scalable and reliable applications.

## Build Invincible Apps

In any complex system, failures are bound to happen. Software engineers spend a lot of time ensuring that what they build can withstand potential failures. Temporal makes your code execution reliable and durable by default.

Normally, if a crash occurs, the state of your application's execution is lost. The application has no memory of what happened before the failure, requiring extensive error handling logic and complex recovery code to resume. The process is time-consuming and error-prone, making it difficult to ensure reliability.

Temporal tracks the progress of your application. If something goes wrong, like a power outage, it guarantees that your application can pick up right where it left off — it’s like having the ultimate autosave. Offloading the responsibility of failure management from the application to the platform removes the need for extensive recovery coding, testing, and maintenance tasks.

### Durable Execution

Temporal is a Durable Execution Platform. Durable Execution ensures that your application behaves correctly despite adverse conditions by guaranteeing that it will run to completion. This shift simplifies the development process. If a failure or a crash happens, your business processes keep running seamlessly without interruption. Developers shift their focus to business logic rather than infrastructure concerns and create applications that are inherently scalable and maintainable.

Thousands of developers trust Temporal for use cases like order processing, customer onboarding, and payment handling because it enables them to build invincible applications that are resilient, durable, and _just work_. With Temporal, your applications keep running, no matter what happens.

## Temporal Application: The Building Blocks

### Workflow

Conceptually, a Workflow is a sequence of steps. You've likely encountered Workflows in your daily life, whether it's:

- Using a mobile app to transfer money
- Booking a vacation
- Filing an expense report
- Creating a new employee onboarding process
- Deploying cloud infrastructure
- Training an AI model

A Temporal Workflow is your business logic, defined in code, outlining each step in your process. Temporal isn’t a no-code Workflow engine — it is **Workflows-as-Code**. Instead of dragging and dropping steps in a visual interface, you write your Workflows in code in your favorite programming language, code editor, and other tools.
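For instance, a Workflow in the Python SDK is just ordinary code; the money-transfer steps below are an illustrative sketch, not a prescribed pattern:

```python
from datetime import timedelta

from temporalio import activity, workflow


@activity.defn
async def withdraw(amount: float) -> None:
    ...  # placeholder: call your banking API


@activity.defn
async def deposit(amount: float) -> None:
    ...  # placeholder: call your banking API


@workflow.defn
class TransferWorkflow:
    @workflow.run
    async def run(self, amount: float) -> str:
        # Each step is an Activity call; Temporal records progress so the
        # sequence can resume after a crash instead of starting over.
        await workflow.execute_activity(
            withdraw, amount, start_to_close_timeout=timedelta(seconds=30)
        )
        await workflow.execute_activity(
            deposit, amount, start_to_close_timeout=timedelta(seconds=30)
        )
        return "transfer complete"
```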
No-code engines eventually hit their limits; Temporal, however, gives you full control and flexibility over your business processes. This allows you to build exactly what you need.

### Activities

Activities are the individual units of work in your Workflow. Activities are defined as either functions or methods, depending on the programming language. Activities often involve interacting with the outside world, such as sending emails, making network requests, writing to a database, or calling an API, all of which are prone to failure. You can call Activities directly from your Workflow code.

If an Activity fails, Temporal automatically retries it based on your configuration. Since Activities often rely on external systems, transient issues can occur. These include temporary but critical problems like network failures, timeouts, or service outages. You have full control over how often and how many times these retries should happen for each Activity (a sketch appears at the end of the Workers section below).

### SDK

Developers create Temporal applications by writing code, just like you would to create any other software. A Temporal SDK (software development kit) is an open-source library that developers add to their application to use Temporal. It provides everything needed to build Workflows, Activities, and various other Temporal features in a specific programming language.

Temporal offers seven SDKs: .NET, Go, Java, PHP, Python, Ruby, and TypeScript. Since Temporal supports multiple programming languages, you can mix and match between languages for polyglot teams. You can easily add any Temporal SDK to your current projects without changing the tools you're already using to build and deploy. Temporal fits right into your existing tech stack.

## Temporal Service

Temporal has two main parts:

1. Your application
2. The Temporal Service (a set of services and components)

At the heart of the Temporal architecture is the Temporal Service, which provides durability, scalability, and reliability for your application. Your application communicates with the Temporal Service, which oversees the execution of critical tasks, such as making an API call, and records their completion. It maintains a detailed history of each event, which it reliably persists to a database.

One of the biggest advantages of the Temporal Service is how it handles failures. The Temporal Service maintains a meticulous record of every step in your Workflows, so that even if something goes wrong, your Workflow can continue from the last successful point. The Temporal Service knows exactly where to resume without losing any work. This saves you from having to write complex error handling code or painstaking recovery mechanisms yourself.

You can run the Temporal Service on your own infrastructure or use Temporal Cloud, a managed service that handles operational overhead and offers scalability and expert support.

## Workers

The real strength of Temporal comes from the combination of your application and the Temporal Service. Whenever your application needs to perform a task, like sending a notification or processing a payment, the Temporal Service orchestrates what needs to be done. Workers, which are part of your application and provided by the Temporal SDK, then carry out the tasks defined in your Workflow. The Worker polls the Temporal Service to see if there are tasks available, and the Temporal Service matches the task with the Worker. The Worker runs the Workflow code based on the details specified in the task.
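Continuing the Python sketch from the Workflow section above (still illustrative), a Worker polls a Task Queue for work, and a `RetryPolicy` tunes how an individual Activity is retried:

```python
import asyncio
from datetime import timedelta

from temporalio.client import Client
from temporalio.common import RetryPolicy
from temporalio.worker import Worker

# Inside a Workflow, retries can be tuned per Activity call, for example:
#
#     await workflow.execute_activity(
#         withdraw,
#         amount,
#         start_to_close_timeout=timedelta(seconds=30),
#         retry_policy=RetryPolicy(
#             initial_interval=timedelta(seconds=1),
#             backoff_coefficient=2.0,
#             maximum_attempts=5,
#         ),
#     )


async def main() -> None:
    # Local development Service; swap in your Cloud connection for production.
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="transfers",
        workflows=[TransferWorkflow],    # from the earlier sketch
        activities=[withdraw, deposit],
    )
    await worker.run()  # polls the Task Queue until stopped


if __name__ == "__main__":
    asyncio.run(main())
```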
This collaboration is crucial for building reliable, scalable, and durable applications. You can run multiple Workers — often dozens, hundreds, or even thousands — to improve application performance and scalability.

A common misconception is that the Temporal Service runs your code. In fact, the Worker runs your code and works with your data directly. Temporal applications are secure by design. Workflows and Activities are seamlessly deployed within your infrastructure, fully integrated into your application. Your data is also protected with your own encryption libraries and keys. You maintain full control over the security of your application from end to end.

## Visibility

Temporal provides two tools that let you see behind the scenes and interact with your Workflows. These are powerful debugging aids that also provide real-time monitoring of your applications.

### Temporal UI

The Temporal UI is a browser-based user interface that allows you to see the progress of your application. Also known as the Web UI, it helps you quickly isolate, debug, and resolve production problems.

### Temporal CLI

The Temporal CLI is a command-line interface tool for managing, monitoring, and debugging Temporal Applications. Through your terminal, you can:

- Start a Workflow
- Trace the progress of a Workflow
- Cancel or terminate a Workflow
- And perform other operations

The Temporal CLI provides developers with direct access to a Temporal Service for local development purposes.

### Event History

With Temporal, your Workflows can seamlessly recover from crashes. This is made possible by the [Event History](https://docs.temporal.io/workflow-execution/event), a complete and durable log of everything that has happened in the lifecycle of a Workflow Execution, together with the Temporal Service's ability to durably persist those Events so they can be used during Replay. Temporal uses the Event History to record every step taken along the way.

Each time your Workflow Definition makes an API call to execute an Activity or start a Timer, for instance, it doesn’t perform the action directly. Instead, it sends a Command to the Temporal Service. A Command is a requested action issued by a Worker to the Temporal Service after a Workflow Task Execution completes. The Temporal Service acts on these Commands, such as scheduling an Activity or starting a Timer. These Commands are then mapped to Events, which are persisted in case of failure.

If the Worker crashes, for example, it uses the Event History to replay the code and recreate the state of the Workflow Execution as it was immediately before the crash. It then resumes progress from the point of failure as if the failure never occurred.

For a deep dive into the Event History or Commands, visit the Temporal [Encyclopedia page](/encyclopedia/event-history) or enroll in one of [our courses](https://learn.temporal.io/courses/).

## Reliable as Gravity

Temporal provides effortless durability, allowing applications to run for days, weeks, or even years without interruption, even if the underlying infrastructure fails. This is what we call _Durable Execution_.

Temporal also represents a paradigm shift in software development. It's not just about making existing patterns more reliable; it's about enabling entirely new approaches to building complex, distributed systems. Temporal simplifies state management, so developers don't have to write tons of extra code to handle every possible thing that could go wrong.
With built-in scalability, Temporal ensures that your application runs smoothly, no matter its size or complexity.

:::tip

Follow one of our tutorials to [Get Started](https://learn.temporal.io/getting_started/) learning how to use a Temporal SDK. Or, jump straight into an [Introduction to Temporal 101](https://learn.temporal.io/courses/temporal_101/) course.

Looking for more? Explore Temporal's [Resource Library](https://temporal.io/resources).

:::

---

## Temporal Use Cases and Design Patterns

This page provides an overview of how leading organizations leverage Temporal to solve real-world problems, general use cases, and architectural design patterns.

## Use Cases of Temporal in Production

Here are some examples where Temporal is most impactful and running in production at large organizations today. For more examples, see our [Temporal Use Cases](https://temporal.io/in-use) page.

### Transactions

Actions or activities involving two or more parties or things that reciprocally affect or influence each other. For example:

- [Payment processing at Stripe](https://temporal.io/resources/on-demand/stripe)
- [Money movement at Coinbase](https://temporal.io/in-use/coinbase)
- [Content management at Box](https://temporal.io/resources/case-studies/box)

### Business processes

A sequence of tasks that find their end in the delivery of a service or product to a client. For example:

- [Bookings at Turo](https://temporal.io/replay/videos/temporal-adoption-and-integration-at-turo)
- [Orders/logistics at Maersk](https://temporal.io/replay/videos/building-a-time-machine-for-the-logistics-industry)
- [Marketing campaigns at Airbnb](https://medium.com/airbnb-engineering/journey-platform-a-low-code-tool-for-creating-interactive-user-workflows-9954f51fa3f8)
- [Human-in-the-loop at Checkr](https://temporal.io/in-use/checkr)

### Entity lifecycle

Complex long-running processes that accumulate state over time. For example:

- [Mortgage underwriting applications at ANZ](https://temporal.io/in-use/anz-story)
- [Menu versioning at Yum! Brands](https://temporal.io/replay-2023/videos/synchronizing-concurrent-workflows)

### Operations

An automated method for getting a repeatable, mundane task accomplished. For example:

- [Infrastructure services at DataDog](https://www.youtube.com/watch?v=Hz7ZZzafBoE)
- [Custom CI/CD at Netflix](https://temporal.io/replay-2023/videos/actor-workflows-reliably-orchestrating-thousands-of-flink-clusters-at)

### AI / ML and Data Engineering

AI and ML developers face challenges in system orchestration, such as managing complex data pipelines and job coordination across GPU resources. Temporal's code-first approach helps build reliable services faster, making it popular among AI companies. For example:

- [Orchestrating video processing at Descript](https://temporal.io/blog/ai-ml-and-data-engineering-workflows-with-temporal#descript)
- [Automating data pipelines at Neosync](https://temporal.io/blog/ai-ml-and-data-engineering-workflows-with-temporal#neosync)

### AI Agents

AI Agents present new uses for Temporal, such as maintaining state over long periods and enabling seamless human intervention when needed. Temporal ensures Durable Execution of tools, LLMs, and conversations, letting you focus on business logic instead of handling failures.
For example:

- [Creating reliable, observable Agents at Lindy](https://temporal.io/resources/case-studies/lindy-reliability-observability-ai-agents-temporal-cloud)
- [Long-running, durable Agents at Dust](https://temporal.io/blog/how-dust-builds-agentic-ai-temporal)
- [Creating account summaries with Agents at ZoomInfo](https://temporal.io/resources/on-demand/account-summaries-gen-ai)

## General Use Cases

### Human in the Loop

"Human in the Loop" systems require human interaction for certain steps, such as customer onboarding, forms, or invoice approval. These are event-driven systems in which humans generate the events, and they can be challenging to implement because of timing issues and unreliable connections between the human and the rest of the system. Such systems can use Schedules and Timers to prompt for user input.

**Example**: [Background checks example using the Go SDK](https://learn.temporal.io/examples/go/background-checks/).

**Code Sample**: [Candidate acceptance example prompting for a response](https://learn.temporal.io/examples/go/background-checks/candidate-acceptance)

### Polyglot Systems

Modern development teams often work with different programming languages based on their expertise and project requirements. Temporal supports this through built-in multi-language capabilities, allowing teams to continue using their preferred languages while working together.

The example below showcases how Workflow Executions, written in different languages, can send messages to each other. Go, Java, PHP, and TypeScript SDKs are represented in this sample. It also shows how to properly propagate errors, including across Workflows written in different languages.

**Example**: [Polyglot example](https://github.com/temporalio/temporal-polyglot).

### Long Running Tasks

This use case is particularly relevant for scenarios like shopping cart Workflows in an eCommerce app, where you can handle long-running tasks efficiently without managing state in a separate database. The Workflow processes one message at a time, ensuring each message is processed only once. This approach addresses issues that can arise with long message-processing times, which in other systems might cause consumer failover (typically with a default 5-minute message poll timeout) and potentially result in duplicate message processing by multiple consumers. Temporal's ability to handle extended task durations makes it well-suited for such scenarios. The [heartbeat](/encyclopedia/detecting-activity-failures#activity-heartbeat) feature lets you know that an Activity is still working, providing insight into the progress of long-running processes.

**Example**: [eCommerce example](https://learn.temporal.io/tutorials/go/build-an-ecommerce-app/).

**Code Sample**: [Temporal eCommerce](https://github.com/temporalio/temporal-ecommerce)

## Design Patterns

### Saga

The Saga pattern is a design pattern used to manage and handle failures in complex Workflows by breaking down a transaction into a series of smaller, manageable sub-transactions. If a step in the Workflow fails, the Saga pattern compensates for this failure by executing specific actions to undo the previous steps. This ensures that even in the event of a failure, the system can revert to a consistent state.

**Examples:**

- [Build a trip booking application in Python](https://learn.temporal.io/tutorials/python/trip-booking-app/).
- [Saga Pattern with Temporal Whitepaper](https://pages.temporal.io/download-saga-pattern-made-easy)
- [To choreograph or orchestrate your saga, that is the question](https://temporal.io/blog/to-choreograph-or-orchestrate-your-saga-that-is-the-question)
- [Saga Webinar](https://pages.temporal.io/on-demand-webinar-what-is-a-saga.html)

### State Machine

A state machine is a software design pattern used to modify a system’s behavior in response to changes in its state. While state machines are widely used in software development, applying them to complex business processes can be a difficult undertaking. Temporal removes much of this complexity by providing a structured approach to Workflow development that avoids the intricate state-management code state machines otherwise require.

**Example**: [State Machine Simplified Whitepaper](https://pages.temporal.io/download-state-machines-simplified.html)

:::tip

If you're interested in code to help get you started, check out our [Temporal Example Applications](https://learn.temporal.io/examples/), [Getting Started Tutorials](https://learn.temporal.io/getting_started/), or [Project-based Tutorials](https://learn.temporal.io/tutorials/).

:::

---

## Why Temporal?

Temporal solves many problems that developers face while building distributed applications. Most of them revolve around these three themes:

- Reliable distributed applications
- Productive development paradigms and code structure
- Visible distributed application state

:::tip See Temporal in action

Watch the following video to see how Temporal ensures an order-fulfillment system can recover from various failures, from process crashes to unreachable APIs.

:::

## Reliable execution

**How does Temporal make applications reliable?**

Temporal makes it easier for developers to build and operate reliable, scalable applications without sacrificing productivity. The design of the system ensures that, once started, an application's main function executes to completion, whether that takes minutes, hours, days, weeks, or even years. Temporal calls this _Durable Execution_.

## Code structure

**How does Temporal simplify application code for software developers?**

By shifting the burden of failure handling from the application to the platform, there is less code for application developers to write, test, and maintain. Temporal's programming model offers developers a way to express their business logic as coherent _Workflows_ that are much easier to develop than distributed code bases.

Choose the SDK that best suits your preferred programming language and start writing your business logic. Integrate your favorite IDE, libraries, and tools into your development process. Temporal also supports polyglot and idiomatic programming, which enables developers to leverage the strengths of various programming languages and integrate Temporal into existing codebases. Developers achieve all of this without having to manage queues or complex state machines.

## State visibility

**How does Temporal make it easier to view the state of the application?**

Temporal provides out-of-the-box tooling that enables developers to see the state of their applications whenever they need to. The Temporal CLI allows developers to manage, monitor, and debug Temporal applications effectively. The browser-based Web UI lets you quickly isolate, debug, and resolve production problems.

---

## Getting started with Temporal

Temporal offers a range of SDKs to help you build Temporal applications.
The SDKs are available for .NET, Go, Java, PHP, Python, Ruby, and TypeScript.

## Temporal Go SDK

Get started with the [Temporal Go SDK](https://learn.temporal.io/getting_started/go).

## Temporal Java SDK

Get started with the [Temporal Java SDK](https://learn.temporal.io/getting_started/java).

## Temporal PHP SDK

Get started with the [Temporal PHP SDK](https://learn.temporal.io/getting_started/php).

## Temporal Python SDK

Get started with the [Temporal Python SDK](https://learn.temporal.io/getting_started/python).

## Temporal TypeScript SDK

Get started with the [Temporal TypeScript SDK](https://learn.temporal.io/getting_started/typescript).

---

## Codec Server - Temporal Platform feature guide

Temporal Server stores and persists the data handled in your Workflow Execution. Encrypting this data ensures that any sensitive application data is secure when handled by the Temporal Server.

For example, if you have sensitive information passed in the following objects that are persisted in the Workflow Execution Event History, use encryption to secure it:

- Inputs and outputs/results in your [Workflow](/workflow-execution), [Activity](/activity-execution), and [Child Workflow](/child-workflows)
- [Signal](/sending-messages#sending-signals) inputs
- [Memo](/workflow-execution#memo)
- Headers (verify if applicable to your SDK)
- [Query](/sending-messages#sending-queries) inputs and results
- Results of [Local Activities](/local-activity) and [Side Effects](/workflow-execution/event#side-effect)
- [Application errors and failures](/references/failures). Failure messages and call stacks are not encoded as codec-capable Payloads by default; you must explicitly enable encoding these common attributes on failures. For more details, see [Failure Converter](/failure-converter).

Using encryption ensures that your sensitive data exists unencrypted only on the Client and the Worker Process that is executing the Workflows and Activities, on hosts that you control.

By default, your data is serialized to a [Payload](/dataconversion#payload) by a [Data Converter](/dataconversion). To encrypt your Payload, configure your custom encryption logic with a [Payload Codec](/payload-codec) and set it with a [custom Data Converter](/default-custom-data-converters#custom-data-converter). A Payload Codec does byte-to-byte conversion to transform your Payload (for example, by implementing compression and/or encryption and decryption) and is an optional step that happens between the Client and the [Payload Converter](/payload-converter).

You can run your Payload Codec with a [Codec Server](/codec-server) and use the Codec Server endpoints in the Web UI and CLI to decode your encrypted Payload locally. For details on how to set up a Codec Server, see [Codec Server setup](#codec-server-setup). However, if you plan to set up [remote data encoding](/remote-data-encoding) for your data, ensure that you consider all security implications of running encryption remotely before implementing it.

When implementing a custom codec, it is recommended to perform your compression or encryption on the entire input Payload and store the result in the data field of a new Payload with a different encoding metadata field.
This ensures that the input Payload's metadata is preserved. When the encoded Payload is sent to be decoded, you can verify the metadata field before applying the decryption. If your Payload is not encoded, it is recommended to pass the unencoded data to the decode function instead of failing the conversion.

Examples for implementing encryption:

- [Go sample](https://github.com/temporalio/samples-go/tree/main/encryption)
- [Java sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/encryptedpayloads)
- [Python sample](https://github.com/temporalio/samples-python/tree/main/encryption)
- [TypeScript sample](https://github.com/temporalio/samples-typescript/tree/main/encryption)
- [.NET sample](https://github.com/temporalio/samples-dotnet/tree/main/src/Encryption)

## Codec Server setup {#codec-server-setup}

Use a Codec Server to programmatically decode your encoded [payloads](/dataconversion#payload). A Codec Server is an HTTP server that uses your custom Codec logic to decode your data remotely. It is independent of the Temporal Service and decodes your encrypted payloads through predefined endpoints. You create, operate, and manage access to your Codec Server in your own environment; the Temporal CLI and the Web UI in turn provide built-in hooks to call the Codec Server to decode encrypted payloads on demand.

When you configure a Codec Server endpoint in the Temporal Web UI or CLI, the Web UI and CLI use the remote endpoint to receive decoded payloads from the Codec Server. See [API contract requirements](#api-contract-specifications). Decoded payloads can then be displayed in the Workflow Execution Event History on the Web UI. Note that when you use a Codec Server, payloads are decoded and returned on the client side only; payloads on the Temporal Server (whether on Temporal Cloud or a self-hosted Temporal Service) remain encrypted.

Because you create, operate, and manage access to your Codec Server in your controlled environment, ensure that you consider the following:

- When you register a Codec Server endpoint with your Web UI, expect the Codec Server to receive multiple requests per Workflow Execution.
- Ensure that you secure access to your Codec Server. For details, see [Authorization](#authorization). You might need some form of [Key management infrastructure](/key-management) for sharing your encryption keys between the Workers and your Codec Server.
- You will need to enable [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) on the HTTP/HTTPS endpoints in your Codec Server to receive requests from the Temporal Web UI.
- You may introduce latency in the Web UI when sending payloads to and receiving them from the Codec Server.

Your Codec Server should share logic with the custom [Payload Codec](/payload-codec) used elsewhere in your application.

### API contract specifications

When you create your Codec Server to handle requests from the Web UI, the following requirements must be met.

#### Endpoints

The Web UI and CLI send a POST request to a `/decode` endpoint. In your Codec Server, create a `/decode` path and pass the incoming payload to the decode method in your Payload Codec.
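As a minimal sketch in Go, assuming a hypothetical `YourPayloadCodec` type that implements the SDK's `converter.PayloadCodec` interface and shares keys with your Workers, the Go SDK's `converter.NewPayloadCodecHTTPHandler` helper can serve the `/decode` (and `/encode`) routes; CORS and authorization, covered below, still need to be layered on:

```go
package main

import (
	"log"
	"net/http"

	"go.temporal.io/sdk/converter"
)

func main() {
	// YourPayloadCodec is a hypothetical type implementing
	// converter.PayloadCodec; it must share logic (and keys) with the
	// codec your Workers use.
	codec := &YourPayloadCodec{}

	// The SDK helper serves POST requests to /decode and /encode
	// using your codec.
	handler := converter.NewPayloadCodecHTTPHandler(codec)

	log.Println("Codec Server listening on :8888")
	log.Fatal(http.ListenAndServe(":8888", handler))
}
```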
For examples on how to create your Codec Server, see the following Codec Server implementation samples:

- [Go](https://github.com/temporalio/samples-go/tree/main/codec-server)
- [Java](https://github.com/temporalio/sdk-java/tree/master/temporal-remote-data-encoder)
- [Python](https://github.com/temporalio/samples-python/blob/main/encryption/codec_server.py)
- [TypeScript](https://github.com/temporalio/samples-typescript/blob/main/encryption/src/codec-server.ts)
- [.NET](https://github.com/temporalio/samples-dotnet/blob/main/src/Encryption/CodecServer/Program.cs)

You can also add a [verification step](#authorization) to check whether the incoming request has the required authorization to access the decode logic in your Payload Codec.

#### Headers

Each request from the Web UI to your Codec Server includes the following headers:

- `Content-Type: application/json`: Ensure that your Codec Server can accommodate this [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types).
- `X-Namespace: {namespace}`: This is a custom HTTP header. Ensure that the [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) configuration in your Codec Server includes this header.
- Optional: `Authorization`: Include this header in your CORS configuration when enabling authorization with your Codec Server. For details on setting up authorization, see [Authorization](#authorization).

#### Request body

The `POST` request body contains a list of payloads. By default, all field values in your payload are base64 encoded, regardless of whether they are encrypted by your custom codec implementation. The following example shows the shape of a `POST` request body, with base64-encoded values shown as placeholders:

```json
{
  "payloads": [
    {
      "metadata": {
        "encoding": "<base64-encoded encoding hint>"
      },
      "data": "<base64-encoded data>"
    }
  ]
}
```

#### CORS

By default, in cross-origin Fetch/XHR invocations, browsers will not send credentials. Enable [Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) requests on your Codec Server to receive HTTP/HTTPS requests from the Temporal Web UI.

At a minimum, enable the following response headers from your Codec Server to allow requests coming from the Temporal Web UI:

- `Access-Control-Allow-Origin`
- `Access-Control-Allow-Methods`
- `Access-Control-Allow-Headers`

For example, for the Temporal Cloud Web UI hosted at https://cloud.temporal.io, enable the following in your Codec Server:

- `Access-Control-Allow-Origin: https://cloud.temporal.io`
- `Access-Control-Allow-Methods: POST, GET, OPTIONS`
- `Access-Control-Allow-Headers: X-Namespace, Content-Type`

For details on what a sample request/response looks like from the Temporal Web UI, see [Sample Request/Response](#sample-requestresponse).

If setting up authorization, include `Authorization` in your `Access-Control-Allow-Headers`. For details on setting up authorization, see [Authorization](#authorization).

#### Authorization

It is important to establish how you will provide access to your Codec Server. Because it is designed to decode potentially sensitive data with a single API call, access to a production Codec Server should be restricted.

Depending on your infrastructure and risk levels, it might be sufficient to restrict HTTP ingress to your Codec Server (such as by using a VPN like [WireGuard](https://www.wireguard.com/)). The Temporal Web UI can communicate with a Codec Server that is only accessible on `localhost`, so this is a legitimate security pattern.
However, if your Codec Server is exposed to the internet at all, you will likely need an authentication solution. If you are already using an organization-wide authentication provider, you should integrate it with your Codec Server. Remember, a Codec Server is just a standalone HTTP server, so you can use existing libraries for OAuth, [Auth0](https://auth0.com/), or any other protocol. [This repository](https://github.com/pvsone/codec-cors-credentials) contains an example of using Auth0 to handle browser-based auth to a Codec Server.

To enable authorization from the Web UI (for both a self-hosted Temporal Service and Temporal Cloud), your Codec Server must be an HTTPS server.

**Temporal Cloud**

The Temporal Cloud UI provides an option to pass access tokens (JWT) to your Codec Server endpoints. Use the access tokens to validate access and then return decoded payloads from the Codec Server. You can enable this by selecting **Pass access token** in the Codec Server interface where you add your endpoint. Enabling this option in the Temporal Cloud UI adds an authorization header to each request sent to the Codec Server endpoint that you set. In your Codec Server implementation, verify the signature on this access token (in your authorization header) against [our JWKS endpoint](https://login.tmprl.cloud/.well-known/jwks.json).

The token provided by the Temporal Cloud UI contains the email identifier of the person requesting access to the payloads. Based on the permissions you have granted that user in your access control systems, set conditions in your Codec Server that determine whether to return decoded payloads or the original encoded payloads.

**Self-hosted Temporal Service**

On a self-hosted Temporal Service, configure [authorization in the Web UI configuration](/references/web-ui-configuration#auth) in your Temporal Service setup. With this enabled, you can pass access tokens to your Codec Server and validate the requests from the Web UI to the Codec Server endpoints that you set. Note that with a self-hosted Temporal Service, you must explicitly configure authorization specifications for the Web UI and CLI.
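As a sketch of where such a check can sit in a Go Codec Server, here is a small middleware; `isAuthorized` is a placeholder for your real validation, such as verifying the JWT signature against the JWKS endpoint above and checking the caller's permissions for the requested Namespace:

```go
package main

import (
	"net/http"
	"strings"
)

// isAuthorized is a placeholder: verify the JWT's signature against
// Temporal Cloud's JWKS endpoint and confirm the caller may read
// payloads in the given Namespace.
func isAuthorized(token, namespace string) bool {
	return false // TODO: implement real validation
}

// requireAuth rejects requests whose Authorization header fails validation
// before they ever reach the decode handler.
func requireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		if token == "" || !isAuthorized(token, r.Header.Get("X-Namespace")) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```

Wrapping the decode handler with this middleware (for example, `requireAuth(handler)`) keeps the authorization decision in one place.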
#### Sample request/response

Consider the following sample request/response when creating and hosting a Codec Server with the following specifications:

- Scheme: `https`
- Host: `dev.mydomain.com/codec`
- Path: `/decode`

```json
HTTP/1.1 POST /decode
Host: https://dev.mydomain.com/codec
Content-Type: application/json
X-Namespace: myapp-dev.acctid123
Authorization: Bearer

{"payloads":[{"metadata":{"encoding":"anNvbi9wcm90b2J1Zg==","messageType":"dGVtcG9yYWxfc2hvcC5vcmNoZXN0cmF0aW9ucy52MS5TdGFydFNob3BwaW5nQ2FydFJlcXVlc3Q="},"data":"eyJjYXJ0SWQiOiJleGFtcGxlLWNhcnQiLCJzaG9wcGVySWQiOiJ5b3VyLXNob3BwZXItaWQtZXhhbXBsZSIsImVtYWlsIjoieW91ci1lbWFpbEBkb21haW4uY29tIn0"}]}

200 OK
Content-Type: application/json

{
  "payloads": [{
    "metadata": {
      "encoding": "json/protobuf",
      "messageType": "temporal_shop.orchestrations.v1.StartShoppingCartRequest"
    },
    "data": {
      "cartId": "example-cart",
      "shopperId": "your-shopper-id-example",
      "email": "your-email@domain.com"
    }
  }]
}
```

You can also perform remote encoding on an `/encode` endpoint, which looks the same in reverse:

- Scheme: `https`
- Host: `dev.mydomain.com/codec`
- Path: `/encode`

```json
HTTP/1.1 POST /encode
Host: https://dev.mydomain.com/codec
Content-Type: application/json
X-Namespace: myapp-dev.acctid123
Authorization: Bearer

{"payloads":[{"metadata":{"encoding":"json/protobuf","messageType":"temporal_shop.orchestrations.v1.StartShoppingCartRequest"},"data":{"cartId":"example-cart","shopperId":"your-shopper-id-example","email":"your-email@domain.com"}}]}

200 OK
Content-Type: application/json

{
  "payloads": [
    {
      "metadata": {
        "encoding": "anNvbi9wcm90b2J1Zg==",
        "messageType": "dGVtcG9yYWxfc2hvcC5vcmNoZXN0cmF0aW9ucy52MS5TdGFydFNob3BwaW5nQ2FydFJlcXVlc3Q="
      },
      "data": "eyJjYXJ0SWQiOiJleGFtcGxlLWNhcnQiLCJzaG9wcGVySWQiOiJ5b3VyLXNob3BwZXItaWQtZXhhbXBsZSIsImVtYWlsIjoieW91ci1lbWFpbEBkb21haW4uY29tIn0"
    }
  ]
}
```

### Set your Codec Server endpoints with Web UI and CLI

After you create your Codec Server and expose the requisite endpoints, set the endpoints in your Web UI and CLI.

#### Web UI

On Temporal Cloud and a self-hosted Temporal Service, you can configure a Codec Server endpoint to be used for a Namespace in the Web UI. To set a Codec Server endpoint on a Namespace, do the following:

1. In the Web UI, go to Namespaces, select the Namespace where you want to configure the Codec Server endpoint, and click **Edit**.
1. In the **Codec Server** section on the Namespace configuration page, enter your Codec Server endpoint and port number.
1. Optional: If your Codec Server is configured to [authenticate requests](#authorization) from Temporal Web UI, enable **Pass access token** to send a JWT access token with the HTTPS requests.
1. Optional: If your Codec Server is configured to [verify origins of requests](#cors), enable **Include cross-origin credentials**.

On Temporal Cloud, you must have [Namespace Admin privileges](/cloud/users#namespace-level-permissions) to add a Codec Server endpoint on the Namespace. Setting a Codec Server endpoint on a Cloud Namespace enables it for all users on the Namespace. Setting a Codec Server endpoint on a self-hosted Temporal Service enables it for the entire Temporal Service. You can use a single Codec Server to handle different encoding and decoding routes for each Namespace.

You can also override the global Codec Server setting at the browser level. This can be useful when developing, testing, or troubleshooting encoding functionality. To set a browser override for the Namespace-level endpoint, do the following:
1. Navigate to **Workflows** in your Namespace.
2. In the top-right corner, select **Configure Codec Server**.
3. Select whether you want to use the Namespace-level (or Temporal Service-level for self-hosted Temporal Service) or the browser-level Codec Endpoint setting as the default for your browser. In Temporal Cloud:
   - **Use Namespace-level settings, where available. Otherwise, use my browser setting.** Uses the Namespace-level Codec Server endpoint by default. If no endpoint is set on the Namespace, your browser setting is applied.
   - **Use my browser setting and ignore Namespace-level setting.** Applies your browser-level setting by default, overriding the Namespace-level Codec Server endpoint.
4. Enter your Codec Server endpoint and port number.
5. Optional: If your Codec Server is configured to [authenticate requests](#authorization) from Temporal Web UI, enable **Pass access token** to send a JWT access token with the HTTPS requests.
6. Optional: If your Codec Server is configured to [verify origins of requests](#cors), enable **Include cross-origin credentials**.

In a self-hosted Temporal Service with a dedicated UI Server configuration, you can also set the codec endpoint in the UI server [configuration file](/references/web-ui-configuration#codec):

```yaml
codec:
  endpoint: {{ default .Env.TEMPORAL_CODEC_ENDPOINT "{namespace}"}}
```

#### CLI

You can configure a Codec Server endpoint with the Temporal CLI using the `--codec-endpoint` flag. For example, if you are running your Codec Server on `http://localhost:8888`, you can use `env set` to set the endpoint globally:

```bash
temporal env set --codec-endpoint "http://localhost:8888"
```

If your Codec Server endpoint is not set globally, provide the `--codec-endpoint` option with each command. For example, to see the decoded output of the Workflow Execution "yourWorkflow" in the Namespace "yourNamespace", run:

```bash
temporal --codec-endpoint "http://localhost:8888" --namespace "yourNamespace" workflow show --workflow-id "yourWorkflow" --run-id "" --output "table"
```

For details, see the [CLI reference](/cli/).

If your Codec Server requires authentication, the Temporal CLI also accepts a `--codec-auth` parameter to supply an authorization header:

```shell
temporal workflow show \
  --workflow-id converters_workflowID \
  --codec-endpoint 'http://localhost:8081/{namespace}' \
  --codec-auth 'auth-header'
```

### Working with Large Payloads

Codec Servers can be used for more than encryption and decryption of sensitive data. Codec Server behavior is left up to implementers; a codec can also call external services or perform other tasks, as long as it hooks in at the encoding and decoding stages of a Workflow payload.

By default, Temporal limits payload size to 4MB. If this limitation is problematic for your use case, you could implement a codec that persists your payloads to an object store outside of Workflow Event Histories (see the sketch after this section). An example implementation is available from [DataDog](https://github.com/DataDog/temporal-large-payload-codec).

### Temporal Nexus

The Data Converter works the same for a Nexus Operation as it does for other payloads sent between a Worker and Temporal Cloud. Both the caller and handler Workers must use compatible Data Converters to pass operation inputs and results between them. See [Nexus Payload Encryption & Data Converter](/nexus/security#payload-encryption-data-converter) for details.
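To make the large-payload idea above concrete, here is a hedged sketch of a claim-check style codec in Go; the `BlobStore` interface, the size threshold, and the `blob-ref` encoding name are illustrative assumptions rather than a prescribed design:

```go
package largecodec

import (
	commonpb "go.temporal.io/api/common/v1"
	"go.temporal.io/sdk/converter"
	"google.golang.org/protobuf/proto"
)

// BlobStore is a hypothetical object-store client (S3, GCS, and so on).
type BlobStore interface {
	Put(data []byte) (key string, err error)
	Get(key string) ([]byte, error)
}

// LargePayloadCodec offloads oversized payloads to external storage and
// keeps only a reference in the Event History.
type LargePayloadCodec struct {
	Store     BlobStore
	Threshold int // bytes; keep well under Temporal's 4MB payload limit
}

var _ converter.PayloadCodec = (*LargePayloadCodec)(nil)

func (c *LargePayloadCodec) Encode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
	out := make([]*commonpb.Payload, len(payloads))
	for i, p := range payloads {
		raw, err := proto.Marshal(p) // serialize the entire original Payload
		if err != nil {
			return nil, err
		}
		if len(raw) < c.Threshold {
			out[i] = p // small enough: pass through unchanged
			continue
		}
		key, err := c.Store.Put(raw) // persist the real bytes externally
		if err != nil {
			return nil, err
		}
		out[i] = &commonpb.Payload{
			Metadata: map[string][]byte{"encoding": []byte("blob-ref")},
			Data:     []byte(key), // the history stores only the reference
		}
	}
	return out, nil
}

func (c *LargePayloadCodec) Decode(payloads []*commonpb.Payload) ([]*commonpb.Payload, error) {
	out := make([]*commonpb.Payload, len(payloads))
	for i, p := range payloads {
		if string(p.Metadata["encoding"]) != "blob-ref" {
			out[i] = p // not ours: pass the unencoded payload through
			continue
		}
		raw, err := c.Store.Get(string(p.Data))
		if err != nil {
			return nil, err
		}
		restored := &commonpb.Payload{}
		if err := proto.Unmarshal(raw, restored); err != nil {
			return nil, err
		}
		out[i] = restored
	}
	return out, nil
}
```

Because only the reference enters the Event History, the history stays small; note that the Workers and any Codec Server sharing this codec must all be able to reach the object store to restore the original payloads.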
---

## Temporal Platform production deployments

**Ready to elevate your durable application into production?**

To take your application to production, you deploy your application code, including your Workflows, Activities, and Workers, on your infrastructure using your existing build, test, and deploy tools. Then you need a production-ready Temporal Service to coordinate the execution of your Workflows and Activities. You can use Temporal Cloud, a fully managed platform, or you can self-host the service.

## Use Temporal Cloud

You can let us handle the operations of running the Temporal Service, and focus on your application. Follow the [Temporal Cloud guide](/cloud) to get started.

## Run a Self-hosted Temporal Service

Alternatively, you can run your own production-level Temporal Service to orchestrate your durable applications. Follow the [Self-hosted guide](/self-hosted-guide) to get started.

## Worker deployments

Whether you're hosting with Temporal Cloud or on your own, you have control over where to run and scale your Workers. We provide guidance on [Worker Deployments](/production-deployment/worker-deployments).

---

## Multi-tenant application patterns

Many SaaS providers and large enterprise platform teams use a single Temporal [Namespace](/namespaces) with [per-tenant Task Queues](#1-task-queues-per-tenant-recommended) to power their multi-tenant applications. This approach maximizes resource efficiency while maintaining logical separation between tenants.

This guide covers architectural patterns, design considerations, and practical examples for building multi-tenant applications with Temporal.

## Architectural principles

When designing a multi-tenant Temporal application, follow these principles:

- **Define your tenant model** - Determine what constitutes a tenant in your business (customers, pricing tiers, teams, etc.)
- **Prefer simplicity** - Start with the simplest pattern that meets your needs
- **Understand Temporal limits** - Design within the constraints of your Temporal deployment
- **Test at scale** - Performance testing must drive your capacity decisions
- **Plan for growth** - Consider how you'll onboard new tenants and scale workers

## Architectural patterns

There are three main patterns for multi-tenant applications in Temporal, listed from most to least recommended:

### 1. Task queues per tenant (Recommended)

**Use different [Task Queues](/task-queue) for each tenant's [Workflows](/workflows) and [Activities](/activities).**

This is the recommended pattern for most use cases. Each tenant gets dedicated Task Queue(s), with [Workers](/workers) polling multiple tenant Task Queues in a single process.

**Pros:**

- Strong isolation between tenants
- Efficient resource utilization
- Flexible worker scaling
- Easy to add new tenants
- Can handle thousands of tenants per [Namespace](/namespaces)

**Cons:**

- Requires worker configuration management
- Potential for uneven resource distribution
- Need to prevent "noisy neighbor" issues at the worker level

### 2. Shared Workflow Task Queues, separate Activity Task Queues

**Share [Workflow Task Queues](/task-queue) but use different [Activity Task Queues](/task-queue) per tenant.**

Use this pattern when [Workflows](/workflows) are lightweight but [Activities](/activities) have heavy resource requirements or external dependencies that need isolation.
**Pros:**

- Easier worker management than full isolation
- Activity-level tenant isolation
- Good for compute-intensive Activities

**Cons:**

- Less isolation than pattern #1
- Workflow visibility is shared
- More complex to reason about

### 3. Namespace per tenant

**Use a separate [Namespace](/namespaces) for each tenant.**

Only practical for a small number (< 50) of high-value tenants due to operational overhead.

**Pros:**

- Complete isolation between tenants
- Per-tenant rate limiting
- Maximum security

**Cons:**

- Higher operational overhead
- Credential and connectivity management per [Namespace](/namespaces)
- Requires more [Workers](/workers) (minimum 2 per Namespace for high availability)
- Expensive at scale

## Task Queue isolation pattern

This section details the recommended pattern for most multi-tenant applications.

### Worker design

When a [Worker](/workers) starts up:

1. **Load tenant configuration** - Retrieve the list of tenants this Worker should handle (from config file, API, or database)
2. **Create [Task Queues](/task-queue)** - For each tenant, generate a unique Task Queue name (e.g., `customer-{tenant-id}`)
3. **Register [Workflows](/workflows) and [Activities](/activities)** - Register your Workflow and Activity implementations once, passing the tenant-specific Task Queue name
4. **Poll multiple Task Queues** - A single Worker process polls all assigned tenant Task Queues

```go
// Example: Go Worker polling multiple tenant Task Queues.
// temporalClient is your SDK client instance.
for _, tenant := range assignedTenants {
	taskQueue := fmt.Sprintf("customer-%s", tenant.ID)
	w := worker.New(temporalClient, taskQueue, worker.Options{})
	w.RegisterWorkflow(YourWorkflow)
	w.RegisterActivity(YourActivity)
	// Start polling this tenant's Task Queue (non-blocking).
	if err := w.Start(); err != nil {
		log.Fatalln("Unable to start Worker:", err)
	}
}
```

### Routing requests to Task Queues

Your application needs to route [Workflow](/workflows) starts and other operations to the correct tenant [Task Queue](/task-queue):

```go
// Example: Starting a Workflow for a specific tenant.
taskQueue := fmt.Sprintf("customer-%s", tenantID)
workflowOptions := client.StartWorkflowOptions{
	ID:        workflowID,
	TaskQueue: taskQueue,
}
// Start the Workflow on the tenant's Task Queue (input is your Workflow argument).
run, err := temporalClient.ExecuteWorkflow(ctx, workflowOptions, YourWorkflow, input)
```

Consider creating an API or service that:

- Maps tenant IDs to Task Queue names
- Tracks which [Workers](/workers) are handling which tenants
- Allows both your application and Workers to read the mappings of:
  1. Tenant IDs to Task Queues
  1. Workers to tenants

### Capacity planning

Key questions to answer through performance testing:

**[Namespace](/namespaces) capacity:**

- How many concurrent [Task Queue](/task-queue) pollers can your Namespace support?
- What are your [Actions Per Second (APS)](/cloud/limits#actions-per-second) limits?
- What are your [Operations Per Second (OPS)](/references/operation-list) limits?

**[Worker](/workers) capacity:**

- How many tenants can a single Worker process handle?
- What are the CPU and memory requirements per tenant?
- How many concurrent [Workflow](/workflows) executions per tenant?
- How many concurrent [Activity](/activities) executions per tenant?

**SDK configuration to tune:**

- `MaxConcurrentWorkflowTaskExecutionSize`
- `MaxConcurrentActivityExecutionSize`
- `MaxConcurrentWorkflowTaskPollers`
- `MaxConcurrentActivityTaskPollers`
- Worker replicas (in Kubernetes deployments)

### Provisioning new tenants

Automate tenant onboarding with a Temporal [Workflow](/workflows):

1. Create a tenant onboarding Workflow that:
   - Validates tenant information
   - Provisions infrastructure
   - Deploys/updates [Worker](/workers) configuration
   - Triggers Worker restarts or scaling
   - Verifies the tenant is operational
2.
Store tenant-to-Worker mappings in a database or configuration service 3. Update Worker deployments to pick up new tenant assignments ## Practical example **Scenario:** A SaaS company has 1,000 customers and expects to grow to 5,000 customers over 3 years. They have 2 [Workflows](/workflows) and ~25 [Activities](/activities) per Workflow. All customers are on the same tier (no segmentation yet). ### Assumptions | Item | Value | |------|-------| | Current customers | 1,000 | | Workflow Task Queues per customer | 1 | | Activity Task Queues per customer | 1 | | Max Task Queue pollers per Namespace | 5,000 | | SDK concurrent Workflow task pollers | 5 | | SDK concurrent Activity task pollers | 5 | | Max concurrent Workflow executions | 200 | | Max concurrent Activity executions | 200 | ### Capacity calculations **[Task Queue](/task-queue) poller limits:** - Each [Worker](/workers) uses 10 pollers per tenant (5 Workflow + 5 Activity) - Maximum Workers in [Namespace](/namespaces): 5,000 pollers ÷ 10 = **500 Workers** **Worker capacity:** - Each Worker can theoretically handle 200 [Workflows](/workflows) and 200 [Activities](/activities) concurrently - Conservative estimate: **250 tenants per Worker** (accounting for overhead) - For 1,000 customers: **4 Workers minimum** (plus replicas for HA) - For 5,000 customers: **20 Workers minimum** (plus replicas for HA) **Namespace capacity:** - At 250 tenants per Worker, need 2 Workers per group of tenants (for HA) - Maximum tenants in Namespace: (500 Workers ÷ 2) × 250 = **62,500 tenants** :::note These are theoretical calculations based on SDK defaults. **Always perform load testing** to determine actual capacity for your specific workload. Monitor CPU, memory, and Temporal metrics during testing. While testing, also pay attention to your [metrics capacity and cardinality](/cloud/metrics/openmetrics/api-reference#managing-high-cardinality). 
:::

### Worker assignment strategies

**Option 1: Static configuration**

- Each [Worker](/workers) reads a config file listing assigned tenant IDs
- Simple to implement
- Requires deployment to add tenants

**Option 2: Dynamic API**

- Workers call an API on startup to get assigned tenants
- Workers identified by static ID (1 to N)
- API returns tenant list based on Worker ID
- More flexible, no deployment needed for new tenants

## Best practices

### Monitoring

Track these [metrics](/references/sdk-metrics) per tenant:

- [Workflow completion](/cloud/metrics/openmetrics/metrics-reference#workflow-completion-metrics) rates
- [Activity execution](/cloud/metrics/openmetrics/metrics-reference#task-queue-metrics) rates
- [Task Queue backlog](/cloud/metrics/openmetrics/metrics-reference#task-queue-metrics)
- [Worker resource utilization](/references/sdk-metrics#worker_task_slots_used)
- [Workflow failure rates](/encyclopedia/detecting-workflow-failures)

### Handling noisy neighbors

Even with [Task Queue](/task-queue) isolation, monitor for tenants that:

- Generate excessive load
- Have high failure rates
- Cause [Worker](/workers) resource exhaustion

Strategies:

- Implement per-tenant rate limiting in your application
- Move problematic tenants to dedicated Workers
- Use [Workflow](/workflows)/[Activity](/activities) timeouts aggressively

### Tenant lifecycle

Plan for:

- **Onboarding** - Automated provisioning [Workflow](/workflows)
- **Scaling** - When to add new [Workers](/workers) for growing tenants
- **Offboarding** - Graceful tenant removal and data cleanup
- **Rebalancing** - Redistributing tenants across Workers

### Search Attributes

Use [Search Attributes](/search-attribute) to enable tenant-scoped queries:

```go
// Add the tenant ID as a Search Attribute when starting the Workflow,
// for example via client.StartWorkflowOptions.
searchAttributes := map[string]interface{}{
	"TenantId": tenantID,
}
```

This allows filtering [Workflows](/workflows) by tenant in the UI and SDK:

```sql
TenantId = 'customer-123' AND ExecutionStatus = 'Running'
```

---

## Self-hosted Archival setup

Archival is a feature that automatically backs up [Event Histories](/workflow-execution/event#event-history) and Visibility records from Temporal Service persistence to a custom blob store.

- [How to create a custom Archiver](#custom-archiver)
- [How to set up Archival](#set-up-archival)

Workflow Execution Event Histories are backed up after the [Retention Period](/temporal-service/temporal-server#retention-period) is reached. Visibility records are backed up immediately after a Workflow Execution reaches a Closed status.

Archival enables Workflow Execution data to persist as long as needed, while not overwhelming the Temporal Service's persistence store. This feature is helpful for compliance and debugging.

Temporal's Archival feature is considered **experimental** and not subject to the normal [versioning and support policy](/temporal-service/temporal-server#versions-and-support).

Archival is not supported when running Temporal through Docker. It's disabled by default when installing the system manually and when deploying through [helm charts](https://github.com/temporalio/helm-charts/blob/main/charts/temporal/templates/server-configmap.yaml). It can be enabled in the [config](https://github.com/temporalio/temporal/blob/main/config/development.yaml).
### How to set up Archival {#set-up-archival}

[Archival](/temporal-service/archival) consists of the following elements:

- **Configuration:** Archival is controlled by the [server configuration](https://github.com/temporalio/temporal/blob/main/config/development.yaml#L81) (i.e., the `config/development.yaml` file).
- **Provider:** Location where the data should be archived. Supported providers are S3, GCloud, and the local file system.
- **URI:** Specifies which provider should be used. The system uses the URI scheme and path to make the determination.

Take the following steps to set up Archival:

1. [Set up the provider](#providers) of your choice.
2. [Configure Archival](#configuration).
3. [Create a Namespace](#namespace-creation) that uses a valid URI and has Archival enabled.

#### Providers

Temporal directly supports several providers:

- **Local file system**: The [filestore archiver](https://github.com/temporalio/temporal/tree/main/common/archiver/filestore) is used to archive data in the file system of whatever host the Temporal server is running on. In the case of [temporal helm-charts](https://github.com/temporalio/helm-charts), the archive data is stored in the `history` pod. APIs do not function with the filestore archiver. This provider is used mainly for local installations and testing and should not be relied on for production environments.
- **Google Cloud**: The [gcloud archiver](https://github.com/temporalio/temporal/tree/main/common/archiver/gcloud) is used to connect and archive data with [Google Cloud](https://cloud.google.com/storage).
- **S3**: The [s3store archiver](https://github.com/temporalio/temporal/tree/main/common/archiver/s3store) is used to connect and archive data with [S3](https://aws.amazon.com/s3).
- **Custom**: If you want to use a provider that is not currently supported, you can [create your own archiver](#custom-archiver) to support it.

Make sure that you save the provider's storage location URI in a place where you can reference it later, because it is passed as a parameter when you [create a Namespace](#namespace-creation).

#### Configuration

Archival configuration is defined in the [`config/development.yaml`](https://github.com/temporalio/temporal/blob/main/config/development.yaml#L93) file. Let's look at an example configuration:

```yaml
---
# Temporal Service level Archival config
archival:
  # Event History configuration
  history:
    # Archival is enabled at the Temporal Service level
    state: 'enabled'
    enableRead: true
    # Namespaces can use either the local filestore provider or the Google Cloud provider
    provider:
      filestore:
        fileMode: '0666'
        dirMode: '0766'
      gstorage:
        credentialsPath: '/tmp/gcloud/keyfile.json'

---
# Default values for a Namespace if none are provided at creation
namespaceDefaults:
  # Archival defaults
  archival:
    # Event History defaults
    history:
      state: 'enabled'
      # New Namespaces will default to the local provider
      URI: 'file:///tmp/temporal_archival/development'
```

You can disable Archival by setting `archival.history.state` and `namespaceDefaults.archival.history.state` to `"disabled"`. Example:

```yaml
archival:
  history:
    state: 'disabled'

namespaceDefaults:
  archival:
    history:
      state: 'disabled'
```

The following table shows the acceptable values for each configuration option and the purpose each serves.
| Config | Acceptable values | Description |
| ------ | ----------------- | ----------- |
| `archival.history.state` | `enabled`, `disabled` | Must be `enabled` to use the Archival feature with any Namespace in the Temporal Service. |
| `archival.history.enableRead` | `true`, `false` | Must be `true` to read from the archived Event History. |
| `archival.history.provider` | Sub-provider configs are `filestore`, `gstorage`, `s3`, or `your_custom_provider`. | The default config specifies `filestore`. |
| `archival.history.provider.filestore.fileMode` | File permission string | File permissions of the archived files. We recommend using the default value of `"0666"` to avoid read/write issues. |
| `archival.history.provider.filestore.dirMode` | File permission string | Directory permissions of the archive directory. We recommend using the default value of `"0766"` to avoid read/write issues. |
| `namespaceDefaults.archival.history.state` | `enabled`, `disabled` | Default state of the Archival feature whenever a new Namespace is created without specifying the Archival state. |
| `namespaceDefaults.archival.history.URI` | Valid URI | Must be a URI of the file store location and match a scheme that correlates to a provider. |

Additional resources: [Temporal Service configuration reference](/references/configuration).

#### Namespace creation

Although Archival is configured at the Temporal Service level, it operates independently within each Namespace. If an Archival URI is not specified when a Namespace is created, the Namespace uses the value of `namespaceDefaults.archival.history.URI` from the `config/development.yaml` file. The Archival URI cannot be changed after the Namespace is created. Each Namespace supports only a single Archival URI, but each Namespace can use a different URI. A Namespace can safely switch Archival between `enabled` and `disabled` states as long as Archival is enabled at the Temporal Service level.

Archival is supported in [Global Namespaces](/global-namespace) (Namespaces that span multiple clusters). When Archival is running in a Global Namespace, it first runs on the active cluster; later it runs on the standby cluster. Before archiving, a history check is done to see what has been previously archived.

#### Test setup

To test Archival locally, start by running a Temporal server:

```bash
./temporal-server start
```

Then register a new Namespace with Archival enabled:

```bash
./temporal operator namespace create --namespace="my-namespace" --global false --history-archival-state="enabled" --retention="4d"
```

:::note

If the retention period isn't set, it defaults to 72h. The minimum retention period is one day. The maximum retention period is 30 days. Setting the retention period to 0 results in the error _A valid retention period is not set on request_.

:::

Next, run a sample Workflow such as the [helloworld temporal sample](https://github.com/temporalio/temporal-go-samples/tree/master/helloworld). When execution is finished, Archival occurs.
#### Retrieve archives

You can retrieve archived Event Histories by copying the `workflowId` and `runId` of the completed Workflow from the log output and running the following command:

```bash
./temporal workflow show --workflow-id="my-workflow-id" --run-id="my-run-id" --namespace="my-namespace"
```

### How to create a custom Archiver {#custom-archiver}

To archive data with a given provider, using the [Archival](/temporal-service/archival) feature, Temporal must have a corresponding Archiver component installed. The platform does not limit you to the existing providers. To use a provider that is not currently supported, you can create your own Archiver.

#### Create a new package

The first step is to create a new package for your implementation in [/common/archiver](https://github.com/temporalio/temporal/tree/main/common/archiver). Create a directory in the archiver folder and arrange the structure to look like the following:

```
temporal/common/archiver
  - filestore/                    -- Filestore implementation
  - provider/
    - provider.go                 -- Provider of archiver instances
  - yourImplementation/
    - historyArchiver.go          -- HistoryArchiver implementation
    - historyArchiver_test.go     -- Unit tests for HistoryArchiver
    - visibilityArchiver.go       -- VisibilityArchiver implementation
    - visibilityArchiver_test.go  -- Unit tests for VisibilityArchiver
```

#### Archiver interfaces

Next, define objects that implement the [HistoryArchiver](https://github.com/temporalio/temporal/blob/main/common/archiver/interface.go#L80) and the [VisibilityArchiver](https://github.com/temporalio/temporal/blob/main/common/archiver/interface.go#L121) interfaces. The objects should live in `historyArchiver.go` and `visibilityArchiver.go`, respectively.

#### Update provider

Update the `GetHistoryArchiver` and `GetVisibilityArchiver` methods of the `archiverProvider` object in the [/common/archiver/provider/provider.go](https://github.com/temporalio/temporal/blob/main/common/archiver/provider/provider.go) file so that it knows how to create an instance of your archiver.

#### Add configs

Add configs for your archiver to the `config/development.yaml` file and then modify the [HistoryArchiverProvider](https://github.com/temporalio/temporal/blob/main/common/config/config.go#L376) and [VisibilityArchiverProvider](https://github.com/temporalio/temporal/blob/main/common/config/config.go#L393) structs in `/common/config/config.go` accordingly.

#### Custom archiver FAQ

**If my custom Archive method can automatically be retried by the caller, how can I record and access progress between retries?**

Handle this situation by using `ArchiverOptions`. Here is an example:

```go
func (a *Archiver) Archive(ctx context.Context, URI string, request *ArchiveRequest, opts ...ArchiveOption) error {
	featureCatalog := GetFeatureCatalog(opts...) // this function is defined in options.go

	var progress progress

	// Check if the feature for recording progress is enabled.
	if featureCatalog.ProgressManager != nil {
		if err := featureCatalog.ProgressManager.LoadProgress(ctx, &progress); err != nil {
			// Log an error message and return the error if needed.
		}
	}

	// Your archiver implementation...

	// Record the current progress.
	if featureCatalog.ProgressManager != nil {
		if err := featureCatalog.ProgressManager.RecordProgress(ctx, progress); err != nil {
			// Log an error message and return the error if needed.
		}
	}

	return nil
}
```

**If my `Archive` method encounters an error that is non-retryable, how do I indicate to the caller that it should not retry?**

```go
func (a *Archiver) Archive(ctx context.Context, URI string, request *ArchiveRequest, opts ...ArchiveOption) error {
	featureCatalog := GetFeatureCatalog(opts...) // this function is defined in options.go

	err := yourArchiverImpl()
	if nonRetryableErr(err) {
		if featureCatalog.NonRetryableError != nil {
			// When the caller gets this error type back, it will not retry.
			return featureCatalog.NonRetryableError()
		}
	}
	return err
}
```

**How does my history archiver implementation read history?**

The archiver package provides a utility called [HistoryIterator](https://github.com/temporalio/temporal/blob/main/common/archiver/historyIterator.go), which is a wrapper of [ExecutionManager](https://github.com/temporalio/temporal/blob/main/common/persistence/data_interfaces.go#L1014). `HistoryIterator` is simpler than the `HistoryManager`, which is available in the BootstrapContainer, so archiver implementations can choose to use it when reading Workflow histories. See the [historyIterator.go](https://github.com/temporalio/temporal/blob/main/common/archiver/history_iterator.go) file for more details. Use the [filestore historyArchiver implementation](https://github.com/temporalio/temporal/tree/main/common/archiver/filestore) as an example.

**Should my archiver define its own error types?**

Each archiver is free to define and return its own errors. However, many common errors that exist between archivers are already defined in [common/archiver/constants.go](https://github.com/temporalio/temporal/blob/main/common/archiver/constants.go).

**Is there a generic query syntax for the visibility archiver?**

Currently, no, but this is something we plan to add in the future. For now, try to make your syntax similar to the one used by our advanced List Workflows API:

- [s3store](https://github.com/temporalio/temporal/tree/main/common/archiver/s3store#visibility-query-syntax)
- [gcloud](https://github.com/temporalio/temporal/tree/main/common/archiver/gcloud#visibility-query-syntax)

---

## Temporal Platform's production readiness checklist

This page describes common challenges faced by customers who self-host Temporal and shares recommendations for mitigating those issues.

Temporal at its core is about durability and reliability. To ensure this durability and reliability, a Temporal Service must be deployed according to best practices. This guide provides a path to building confidence in a Temporal Service and lists the key tests you, as a user, should perform against the service.

## Self-Hosting Challenge Areas

Significant engineering and ongoing effort is required to resolve several potential challenges:

- Scalability with spiky or growing workloads
- Global hosting
- Uptime, availability, and reliability
- Management and control plane
- Latency, which must be kept low and consistent
- [Security](/self-hosted-guide/security)
- Maintenance and upgrades
- Expert support to users of the service
- Cost management

Each of these components is an essential part of building a mission-critical Temporal Service. Without demonstrated architectural durability, the value of Temporal's [Durable Execution](https://temporal.io/how-it-works) model is compromised.

## Scalability with Variable or Growing Workloads {#scaling-and-metrics}

Workloads can be highly variable, and you may experience sustained workload spikes.
Temporal recommends scaling your clusters to well above the average throughput. See [Scaling Temporal: The Basics](https://temporal.io/blog/scaling-temporal-the-basics) for an introduction to the topic.

Temporal Server throughput is often limited by the number of [Shards](/temporal-service/temporal-server#history-shard) configured for the Temporal Service. A Shard is a unit within a Temporal Service by which concurrent Workflow Execution throughput can be scaled. Shard capacity, and often overall cluster throughput, is fixed when a cluster is built and cannot be adjusted later. Adding more Shards requires building a new cluster and migrating to it.

The requirements of your Temporal Service will vary widely based on your intended production workload. You will want to run your own proof-of-concept tests and watch for key metrics to understand the system health and scaling needs.

**Load testing.** You can use [the Omes benchmarking tool](https://github.com/temporalio/omes/), see how we ourselves [stress test Temporal](https://temporal.io/blog/temporal-deep-dive-stress-testing/), or write your own. All metrics emitted by the server are [listed in Temporal's source](https://github.com/temporalio/temporal/blob/main/common/metrics/defs.go). There are also equivalent metrics that you can configure from the client side. At a high level, you will want to track these three categories of metrics:

- **Service metrics**: For each request made by the service handler, we emit `service_requests`, `service_errors`, and `service_latency` metrics with `type`, `operation`, and `namespace` tags. This gives you basic visibility into service usage and allows you to look at request rates across services, Namespaces, and even operations.
- **Persistence metrics**: The Server emits `persistence_requests`, `persistence_errors`, and `persistence_latency` metrics for each persistence operation. These metrics include the `operation` tag, so you can get the request rates, error rates, or latencies per operation. They are especially useful in identifying issues caused by the database.
- **Workflow Execution stats**: The Server also emits counters when Workflow Executions complete. These are useful in getting overall stats about Workflow Execution completions. Use the `workflow_success`, `workflow_failed`, `workflow_timeout`, `workflow_terminate`, and `workflow_cancel` counters for each type of Workflow Execution completion. These include the `namespace` tag.
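Assuming you scrape these server metrics into Prometheus (as described in the monitoring guide later in this document), queries along the following lines cover all three categories; exact metric names can vary slightly with your server version and metrics pipeline:

```
# Service metrics: request rate per operation
sum(rate(service_requests[1m])) by (operation)

# Persistence metrics: error ratio across persistence operations
sum(rate(persistence_errors[1m])) / sum(rate(persistence_requests[1m]))

# Workflow Execution stats: completions by outcome, per Namespace
sum(rate(workflow_success[5m])) by (namespace)
sum(rate(workflow_failed[5m])) by (namespace)
```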
## Availability

A high level of availability and reliability (99.99%) is a requirement for mission-critical deployments. Temporal recommends testing for this availability level while load testing. We also recommend validating this level of reliability while doing server upgrades, to ensure no loss of service availability.

Temporal Clusters can be deployed in as many regions as needed to meet various requirements:

- Data Residency
- Latency
- Security / Isolation

This can multiply the effort to implement and maintain clusters. [Temporal Cloud is available in various cloud provider regions](/cloud/service-availability).

## Management and Control Plane

Temporal success leads to larger Temporal deployments. Needs can increase, going from one or two production use cases in a single region to many use cases in many regions. Running multiple Temporal Services is complex work, as each needs its own setup, tuning, and configuration. Monitoring and managing all your Temporal Services in a unified way becomes an operational burden.

Consider adding a layer on top of Temporal to manage multiple Temporal Services: a control plane. A control plane manages and directs data flow, deciding where data packets should be sent. A Temporal Service control plane can streamline operations and improve efficiency. Since Temporal does not ship its own open source control plane, rolling your own can be complex and take effort to build.

Temporal Cloud provides exactly that support. With Temporal Cloud, all Namespaces in all regions can be managed from a single view. [Temporal Cloud](https://temporal.io/cloud) also has RBAC functionality that can delegate responsibilities for individual Namespaces. Self-hosted Temporal does not support RBAC or audit logging out of the box. Temporal Cloud provides RBAC and SSO support, audit logging, data encryption, third-party penetration test validation, and SOC 2 Type II and HIPAA compliance.

## Maintenance and Upgrades

Temporal recommends keeping up-to-date and not falling behind on your server versions. Temporal Server is proactively updated and releases as often as every two weeks. Temporal recommends [upgrading sequentially](/self-hosted-guide/upgrade-server), not skipping any minor versions, although you can skip patch versions. No support is guaranteed for Temporal Server, and very old servers are hard for even the community to support, so we encourage you to keep up to date.

You must create and maintain the infrastructure to host and run your self-hosted Temporal installation, such as Kubernetes, as well as data stores for persistence. Server upgrades can negatively affect self-hosted Temporal Service availability. Temporal recommends load and availability testing during the upgrade process to understand the performance implications.

Temporal Cloud updates are managed by the Temporal Cloud team; Cloud upgrades are seamless.

## Expert Support

Temporal recommends that customer platform teams who are building out a Temporal service gain deep experience across the lifecycle and breadth of a Temporal application. Specific activities include:

- [Worker tuning](/develop/worker-performance)
- [Worker best practices](/workers)
- Code reviews
- Design guidance
- Training
- Security reviews
- [Metrics](/references/sdk-metrics) and monitoring
- Technical onboarding

[Temporal support](/cloud/support) provides guidance on all of the above.

## Cost Management

Running a mission-critical, global Temporal Service can be expensive. Temporal Server is a complex system to run and scale. Temporal recommends performance testing and planning scaling as your performance requirements evolve. Following our guidance may lead you to oversize your self-hosted Temporal Server installs, but this headroom is necessary to handle unpredictable, spiky workloads. Performance testing can help you right-size your environments.

Running mission-critical Temporal as a service requires multiple Temporal Clusters for high availability and global coverage. It is a good practice to have trained, experienced administrators familiar with Temporal Service architecture maintain your Temporal servers and provide a mission-critical service. Staffing, training, and skill development can be significant costs of maintaining a Temporal Service. [Temporal Cloud](https://temporal.io/cloud) can be significantly less expensive to set up and scale.

---

## Self-hosted Temporal Service defaults

:::info Looking for Temporal Cloud defaults?
See the [Temporal Cloud defaults and limits page](/cloud/limits)

:::

This page details many of the defaults coded into the Temporal Platform that can produce errors and warnings. Errors are hard limits that fail when reached. Warnings are soft limits that produce a warning log on the server side.

:::info

Most of these limits apply to each Workflow Execution individually; they do not pertain to the entire Temporal Platform or to individual Namespaces.

:::

- **Identifiers:** By default, the maximum length for identifiers (such as Workflow Id, Workflow Type, and Task Queue name) is 1000 characters.
  - This is configurable with the `limit.maxIDLength` dynamic config variable, set to 255 in [this SQL example](https://github.com/temporalio/docker-compose/blob/93d382ef9133e4cde8ce311de5153cd0cc9fbd0c/dynamicconfig/development-sql.yaml#L1-L2).
  - The character format is UTF-8.
- **gRPC:** gRPC has a limit of 4 MB for [each message received](https://github.com/grpc/grpc/blob/v1.36.2/include/grpc/impl/codegen/grpc_types.h#L466).
- **Event batch size:** The `DefaultTransactionSizeLimit` limit is [4 MB](https://github.com/temporalio/temporal/pull/1363). This is the largest transaction size allowed for the persistence of Event Histories.
- **Blob size limit** for Payloads (including Workflow context and each Workflow and Activity argument and return value; _[source](https://github.com/temporalio/temporal/blob/v1.7.0/service/frontend/service.go#L133-L134)_):
  - Temporal warns at 256 KB: `Blob size exceeds limit.`
  - Temporal errors at 2 MB: `ErrBlobSizeExceedsLimit: Blob data size exceeds limit.`
  - Refer to [Troubleshoot blob size limit error](/troubleshooting/blob-size-limit-error).
- **Workflow Execution Update limits**:
  - A single Workflow Execution can have a maximum of 10 in-flight Updates and 2000 total Updates in History.
- **History total size limit** (leading to a terminated Workflow Execution):
  - Temporal warns at 10 MB: [history size exceeds warn limit](https://github.com/temporalio/temporal/blob/v1.7.0/service/history/workflowExecutionContext.go#L1238).
  - Temporal errors at 50 MB: [history size exceeds error limit](https://github.com/temporalio/temporal/blob/v1.7.0/service/history/workflowExecutionContext.go#L1204).
  - This is configurable with [HistorySizeLimitError and HistorySizeLimitWarn](https://github.com/temporalio/temporal/blob/v1.7.0/service/history/configs/config.go#L380-L381).
- **History total count limit** (leading to a terminated Workflow Execution):
  - Temporal warns after 10,240 Events: [history size exceeds warn limit](https://github.com/temporalio/temporal/blob/v1.7.0/service/history/workflowExecutionContext.go#L1238).
  - Temporal errors after 51,200 Events: [history size exceeds error limit](https://github.com/temporalio/temporal/blob/v1.7.0/service/history/workflowExecutionContext.go#L1204).
  - This is configurable with [HistoryCountLimitError and HistoryCountLimitWarn](https://github.com/temporalio/temporal/blob/v1.7.0/service/history/configs/config.go#L382-L383).
- **Concurrent limit** - The following Commands are limited:
  - `ScheduleActivityTask`
  - `SignalExternalWorkflowExecution`
  - `RequestCancelExternalWorkflowExecution`
  - `StartChildWorkflowExecution`
  - These will fail if the concurrent pending count exceeds 2,000. For optimal performance, limit concurrent operations to 500 or fewer. This reduces the Workflow's Event History size and decreases loading time in the Web UI.
  - As of v1.21, the open source Temporal Service has a default limit of 2,000 pending Activities, Child Workflows, Signals, or Workflow cancellation requests, but you can override the limits in the dynamic configuration using these variables:
    - `limit.numPendingActivities.error`
    - `limit.numPendingSignals.error`
    - `limit.numPendingCancelRequests.error`
    - `limit.numPendingChildExecutions.error`
- By default, [Batch jobs](/cli/batch) are limited to one job at a time.
- [Custom Search Attributes limits](/search-attribute#custom-search-attribute-limits)

For details on dynamic configuration keys, see [Dynamic configuration reference](/references/dynamic-configuration).
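As a sketch of the dynamic configuration format, following the SQL example linked above: each key maps to a list of values with optional constraints. The file path is illustrative, and the `500` value simply echoes the performance suggestion above:

```yaml
# Dynamic configuration file (for example, dynamicconfig/development.yaml).
# Each key maps to a list of values; empty constraints apply globally.
limit.maxIDLength:
  - value: 255
    constraints: {}
limit.numPendingActivities.error:
  - value: 500
    constraints: {}
```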
---

## Deploying a Temporal Service

There are many ways to self-host a [Temporal Service](/temporal-service). The right way for you depends entirely on your use case and where you plan to run it. For step-by-step guides on deploying and configuring Temporal, refer to our [Infrastructure tutorials](https://learn.temporal.io/tutorials/infrastructure/).

### Minimum requirements

The Temporal Server depends on a database. Supported databases include the following:

- [Apache Cassandra](/self-hosted-guide/visibility#cassandra)
- [MySQL](/self-hosted-guide/visibility#mysql)
- [PostgreSQL](/self-hosted-guide/visibility#postgresql)
- [SQLite](/self-hosted-guide/visibility#sqlite)

### Docker & Docker Compose

You can run a Temporal Service in [Docker](https://docs.docker.com/engine/install) containers using [Docker Compose](https://docs.docker.com/compose/install). If you have Docker and Docker Compose installed, all you need to do is clone the [temporalio/docker-compose](https://github.com/temporalio/docker-compose) repo and run the `docker compose up` command from its root.

The `temporalio/docker-compose` repo comes loaded with a variety of configuration templates that enable you to try all three databases that the Temporal Platform supports (PostgreSQL, MySQL, Cassandra). It also enables you to try [Advanced Visibility](/visibility#advanced-visibility) using [Search Attributes](/search-attribute), emit metrics, and even play with the [Archival](/temporal-service/archival) feature.

The Docker images in this repo are produced using the Temporal Server [auto-setup.sh](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh) script. This script defaults to creating images that run all the Temporal Server services in a single process. You can use this script as a starting point for producing your own images.

The following commands start and run a Temporal Service in Docker using the default configuration:

```bash
git clone https://github.com/temporalio/docker-compose.git
cd docker-compose
docker compose up
```

Local [Temporal Clients](/encyclopedia/temporal-sdks#temporal-client) and [Workers](/workers) can connect to the Temporal Service running in Docker at 127.0.0.1:7233 (the default connection for most SDKs) and to the Temporal Web UI at 127.0.0.1:8080.

To try other configurations (different dependencies and databases), or to try a custom Docker image, follow the [temporalio/docker-compose README](https://github.com/temporalio/docker-compose/blob/main/README.md).

### Temporal Server binaries

You can run a complete Temporal Server by deploying just two Go binaries -- the [core Temporal Server](https://github.com/temporalio/temporal/releases/) and the [Temporal UI Server](https://github.com/temporalio/ui-server/releases).
Refer to our [tutorial site](https://learn.temporal.io/) for more details on how to deploy Temporal binaries behind an [Nginx reverse proxy](https://learn.temporal.io/tutorials/infrastructure/nginx-sqlite-binary/) or an [Envoy edge proxy](https://learn.temporal.io/tutorials/infrastructure/envoy-sqlite-binary/).

Each service can also be deployed separately. For example, if you are using Kubernetes, you could have one service per pod, so they can scale independently in the future. In Docker, you could run each service in its own container, using the `SERVICES` environment variable to specify the service:

```bash
# Persistence/schema setup flags omitted.
# SERVICES: one or more of history, matching, worker, frontend
# LOG_LEVEL: logging level
# DYNAMIC_CONFIG_FILE_PATH: dynamic config file to be watched
docker run \
  -e SERVICES=history \
  -e LOG_LEVEL=debug,info \
  -e DYNAMIC_CONFIG_FILE_PATH=config/foo.yaml \
  temporalio/server:<tag>
```

The environment variables supported by the Temporal Docker images are documented [on Docker Hub](https://hub.docker.com/r/temporalio/auto-setup).

Each Temporal Server release ships an [Auto Setup](https://temporal.io/blog/auto-setup) Docker image that includes an [auto-setup.sh](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh) script. We recommend using this script for initial schema setup of each supported database.

### Importing the Server package

The Temporal Server is a standalone Go application that can be [imported](/references/server-options) into another project. You might want to do this to pass custom plugins or any other customizations through the [Server Options](/references/server-options). Then you can build and run a binary that contains your customizations. This requires Go v1.19 or later, as specified in the Temporal Server [Build prerequisites](https://github.com/temporalio/temporal/blob/main/CONTRIBUTING.md#build-prerequisites).
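As a minimal sketch of what importing the Server package can look like (the config path, environment name, and service list are illustrative; check the [Server Options](/references/server-options) reference for the options available in your server version):

```go
package main

import (
	"log"

	"go.temporal.io/server/common/config"
	"go.temporal.io/server/temporal"
)

func main() {
	// Load the server's YAML configuration; "development" and "./config"
	// are illustrative values for the environment name and config directory.
	cfg, err := config.LoadConfig("development", "./config", "")
	if err != nil {
		log.Fatalln("Unable to load config:", err)
	}

	// Build a server that runs all four services in one process.
	// Custom plugins would be passed as additional ServerOptions here.
	s, err := temporal.NewServer(
		temporal.WithConfig(cfg),
		temporal.ForServices([]string{"frontend", "history", "matching", "worker"}),
		temporal.InterruptOn(temporal.InterruptCh()),
	)
	if err != nil {
		log.Fatalln("Unable to create server:", err)
	}
	if err := s.Start(); err != nil {
		log.Fatalln("Unable to start server:", err)
	}
}
```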
### Helm charts

[Temporal Helm charts](https://github.com/temporalio/helm-charts) enable you to get a Temporal Service running on [Kubernetes](https://kubernetes.io/) by deploying the Temporal Server services to individual pods and connecting them to your existing database and Elasticsearch instances. The Temporal Helm charts repo contains [extensive documentation](https://github.com/temporalio/helm-charts/blob/main/README.md) about Kubernetes deployments.

---

## Self-hosted Temporal Service guide

Welcome to the self-hosted Temporal Service guide. This guide shows you how to self-host open source infrastructure software that orchestrates your durable applications.

:::info Sign up for Temporal Cloud!

Instead of self-hosting, you can use [Temporal Cloud](/cloud).

:::

:::info Getting started with Temporal?

If you are just getting started with Temporal, we recommend our [introductory tutorials and courses](https://learn.temporal.io).

:::

:::info Building an app?

If you are building a new Temporal Application, you might only need a [development server](/cli#start-dev-server) available through the [Temporal CLI](/cli). Check out the [dev guide](/develop) for application development best practices.

:::

- [Deployment](/self-hosted-guide/deployment)
- [Defaults](/self-hosted-guide/defaults)
- [Production checklist](/self-hosted-guide/production-checklist)
- [Namespaces](/self-hosted-guide/namespaces)
- [Security](/self-hosted-guide/security)
- [Monitoring](/self-hosted-guide/monitoring)
- [Visibility](/self-hosted-guide/visibility)
- [Data encryption](/production-deployment/data-encryption)
- [Upgrading server](/self-hosted-guide/upgrade-server#upgrade-server)
- [Archival](/self-hosted-guide/archival)
- [Multi-Cluster Replication](/self-hosted-guide/multi-cluster-replication)
- [Temporal Nexus](/production-deployment/self-hosted-guide/nexus)

---

## Monitor Temporal Platform metrics

The Temporal Service and SDKs emit metrics that can be used to monitor performance and troubleshoot issues. You can relay these metrics to any monitoring and observability platform.

This guide provides an example of configuring [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/) to work with the observability metrics emitted by Temporal. This solution can work on its own, or it can serve as a baseline for you to further customize and integrate with other observability tooling. For example, it is also possible to use the [OpenTelemetry Collector](https://temporal.io/code-exchange/temporal-opentelemetry) in your stack instead of scraping metrics directly with Prometheus, or [Datadog](#datadog) as a frontend instead of Grafana.

This configuration assumes that you have [Docker](https://www.docker.com/) installed and are running a [Temporal dev server](https://temporal.io/setup/start-development-server) via the CLI.

## Prometheus

This section discusses exporting metrics from Temporal SDKs and setting up Prometheus to collect metrics on Temporal Service, Temporal Client, and Temporal Worker performance.

The Temporal Service and SDKs emit all metrics by default. However, you must enable Prometheus in your application code (using the Temporal SDKs) and in your Temporal Service configuration to collect the metrics emitted from your SDK and Temporal Service.

First, you'll need to create a `prometheus.yml` configuration file with some target ports to collect metrics from. Here is a sample with one Temporal Service metrics target and two Temporal Worker (SDK) metrics targets:

```
global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'temporalmetrics'
    metrics_path: /metrics
    scheme: http
    static_configs:
      # Temporal Service metrics target
      - targets:
          - 'host.docker.internal:8000'
        labels:
          group: 'server-metrics'

      # Local app targets (set in SDK code)
      - targets:
          - 'host.docker.internal:8077'
          - 'host.docker.internal:8078'
        labels:
          group: 'sdk-metrics'
```

In this example, Prometheus is configured to scrape at 10-second intervals and to listen for Temporal Service metrics on `host.docker.internal:8000` and SDK metrics on two targets, `host.docker.internal:8077` and `host.docker.internal:8078`. The `8077` and `8078` ports must be set on `WorkflowServiceStubs` in your application code with your preferred SDK -- there is an example of this in the next section. You can set up as many targets as required.

:::info

For further Prometheus configuration options, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/).

:::
You can use Docker to run the official Prometheus image with this configuration:

```bash
docker run -p 9090:9090 -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
```

Next, launch your Temporal dev server from the CLI with an additional `--metrics-port 8000` parameter:

```bash
temporal server start-dev --metrics-port 8000
```

:::info

Refer to the [Temporal Cluster configuration reference](/references/configuration#global) to expose metrics from a production service.

:::

You should now have both Prometheus and a Temporal Service running locally, with Temporal providing Service metrics to Prometheus. Next, you'll want to configure SDK metrics as well.

### SDK metrics setup

SDK metrics are emitted by Temporal Workers and other Clients, and must be configured in your application code. The Metrics section in the Observability guide details how to create hooks for all supported SDKs:

- [Go](/develop/go/observability#metrics)
- [Java](/develop/java/observability#metrics)
- [PHP](/develop/php/observability)
- [Python](/develop/python/observability#metrics)
- [TypeScript](/develop/typescript/observability#metrics)
- [.NET](/develop/dotnet/observability#metrics)
- [Ruby](/develop/ruby/observability#metrics)

For end-to-end examples of how to expose metrics from each SDK, see the metrics samples:

- [Go SDK Sample](https://github.com/temporalio/samples-go/tree/main/metrics)
- [Java SDK Sample](https://github.com/temporalio/samples-java/tree/main/core/src/main/java/io/temporal/samples/metrics)
- [Python SDK Sample](https://github.com/temporalio/samples-python/tree/main/prometheus)
- [TypeScript SDK Sample](https://github.com/temporalio/samples-typescript/tree/main/interceptors-opentelemetry)
- [.NET SDK Sample](https://github.com/temporalio/samples-dotnet/tree/main/src/OpenTelemetry)

Some of these may require you to set different metrics port numbers based on the Prometheus example here, which is configured to scrape ports `8077` and `8078` by default. Follow the instructions from each of the samples to run Workflows and begin emitting metrics. This will allow you to populate a dashboard in the next section and understand how to further customize Temporal observability for your needs.
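For example, here is a condensed sketch based on the Go SDK sample linked above, exposing Client and Worker metrics on port `8077` via the SDK's tally contrib package:

```go
package main

import (
	"log"
	"time"

	prom "github.com/prometheus/client_golang/prometheus"
	"github.com/uber-go/tally/v4"
	"github.com/uber-go/tally/v4/prometheus"
	"go.temporal.io/sdk/client"
	sdktally "go.temporal.io/sdk/contrib/tally"
)

func main() {
	// Serve SDK metrics on the port that matches a scrape target
	// in prometheus.yml (8077 in the configuration above).
	scope := newPrometheusScope(prometheus.Configuration{
		ListenAddress: "0.0.0.0:8077",
		TimerType:     "histogram",
	})

	c, err := client.Dial(client.Options{
		MetricsHandler: sdktally.NewMetricsHandler(scope),
	})
	if err != nil {
		log.Fatalln("Unable to create Temporal Client:", err)
	}
	defer c.Close()
	// ... create and run Workers from this Client as usual ...
}

func newPrometheusScope(c prometheus.Configuration) tally.Scope {
	reporter, err := c.NewReporter(prometheus.ConfigurationOptions{
		Registry: prom.NewRegistry(),
		OnError: func(err error) {
			log.Println("Error in Prometheus reporter:", err)
		},
	})
	if err != nil {
		log.Fatalln("Error creating Prometheus reporter:", err)
	}
	scope, _ := tally.NewRootScope(tally.ScopeOptions{
		CachedReporter: reporter,
		Separator:      prometheus.DefaultSeparator,
	}, time.Second)
	// Sanitize metric names into Prometheus-friendly form.
	return sdktally.NewPrometheusNamingScope(scope)
}
```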
### Verifying Prometheus configuration

Once your Workflows are running and emitting metrics, you can visit [http://localhost:9090/targets](http://localhost:9090/targets) on your local Prometheus instance to verify that it is able to scrape the provided endpoints.

![Prometheus scrape targets](/img/observability/prometheus-targets.png)

This example shows a response from the server metrics endpoint, provided by the Temporal dev server, and two SDK metrics endpoints, as defined in the Prometheus configuration. To create this example, we used the Go and Python metrics samples, running on ports 8077 and 8078 respectively. If you are not pushing data to exactly three metrics endpoints, your environment may look different.

Next, you can visit the [local Prometheus query endpoint](http://localhost:9090/query) to manually run [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) queries on your exported metrics, or proceed to the next section to configure Grafana to generate dashboards from those metrics.

## Grafana

With [Prometheus](#prometheus) configured, deploy Grafana as a metrics frontend, and configure it to use Prometheus as a data source.

As before, you can use Docker to run the official Grafana image:

```bash
docker run -d -p 3000:3000 grafana/grafana-enterprise
```

This deploys a Grafana instance with a default username and password of `admin`/`admin`. In production, you would want to [configure authentication](https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-authentication/generic-oauth/) and control port access to Grafana.

:::info

For more information on how to customize your Grafana setup, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/setup-grafana/).

:::

Next, configure Grafana to use Prometheus as the data source. To do this, click "Add new data source" from the "Connections" menu in the Grafana sidebar, and add Prometheus from the list.

You will be prompted to add additional configuration parameters. If you are following this guide using Docker, use `http://host.docker.internal:9090` as the Prometheus address. This is a [DNS name provided by Docker Desktop](https://docs.docker.com/desktop/features/networking/#use-cases-and-workarounds) which resolves to the internal IP address used by the host machine, and it allows you to connect applications across Docker containers without additional configuration rules.

This is the only parameter you need to set for your Prometheus configuration. After providing it, scroll down to the "Save and Test" button, and you can validate Prometheus as a data source for this Grafana instance.

![Grafana data sources](/img/observability/grafana-data-sources.png)

In this example, Grafana is set to pull metrics from Prometheus at port 9090, as defined in the Prometheus configuration. Now you just need to add some of our provided dashboards for visualizing Temporal metrics.

### Dashboard setup

We provide community-driven Grafana dashboards for monitoring Temporal Server and SDK metrics in a [dashboards](https://github.com/temporalio/dashboards/) repo. Follow the instructions in that repo's README to import the dashboards into Grafana. This way, you can create at least one dashboard for monitoring server metrics:

![Grafana server metrics](/img/observability/grafana-server-metrics.png)

And at least one other dashboard for monitoring SDK metrics:

![Grafana SDK metrics](/img/observability/grafana-sdk-metrics.png)

:::info

You can provide additional queries in your dashboard to report other data as needed. For more details on configuring Grafana dashboards, see the [Grafana Dashboards documentation](https://grafana.com/docs/grafana/latest/dashboards/).

:::

From here, you can configure Grafana [Alerts](https://grafana.com/docs/grafana/latest/alerting/) for any monitored parameters, add custom metrics to your Temporal SDK code, and use these observability features to help scale your Temporal deployment. Refer to the [Cluster metrics](/references/cluster-metrics) and [SDK metrics](/references/sdk-metrics) references for more.

## Configuring Temporal Service health checks {#health-checks}

The [Frontend Service](/temporal-service/temporal-server#frontend-service) supports TCP or [gRPC](https://github.com/grpc/grpc/blob/875066b61e3b57af4bb1d6e36aabe95a4f6ba4f7/src/proto/grpc/health/v1/health.proto#L45) health checks on port 7233.
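For a quick manual check, you can query the standard gRPC health service with the [grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe) tool, passing the Frontend's health service name (shown here against a local server):

```bash
grpc-health-probe -addr=localhost:7233 \
  -service=temporal.api.workflowservice.v1.WorkflowService
```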
If you use [Nomad](https://www.nomadproject.io/) to manage your containers, the [check stanza](https://developer.hashicorp.com/nomad/docs/job-specification/check) would look like this for TCP:

```
service {
  check {
    type     = "tcp"
    port     = 7233
    interval = "10s"
    timeout  = "2s"
  }
}
```

or like this for gRPC (requires Consul ≥ `1.0.5`):

```
service {
  check {
    type     = "grpc"
    port     = 7233
    interval = "10s"
    timeout  = "2s"
  }
}
```

## Installing via Helm Chart

If you are installing and running Temporal via [Helm chart](https://github.com/temporalio/helm-charts), you can also [provide additional parameters](https://github.com/temporalio/helm-charts?tab=readme-ov-file#exploring-metrics-via-grafana) to populate and explore a Grafana dashboard out of the box.

## Datadog {#datadog}

Datadog has a Temporal integration for collecting Temporal Service metrics. Once you've [configured Prometheus](#prometheus), you can configure the [Datadog Agent](https://docs.datadoghq.com/integrations/temporal/).

If you are using [Temporal Cloud](/cloud/overview), you can also [integrate Datadog directly](https://docs.datadoghq.com/integrations/temporal-cloud/), without needing to use Prometheus.

---

## Self-hosted Multi-Cluster Replication

Multi-Cluster Replication is a feature that asynchronously replicates Workflow Executions from active Clusters to other passive Clusters, for backup and state reconstruction. When necessary, for higher availability, Cluster operators can fail over to any of the backup Clusters.

Temporal's Multi-Cluster Replication feature is considered **experimental** and not subject to the normal [versioning and support policy](/temporal-service/temporal-server#versions-and-support).

Temporal automatically forwards Start, Signal, and Query requests to the active Cluster. This feature must be enabled through a Dynamic Config flag per [Global Namespace](/global-namespace). When the feature is enabled, Tasks are sent to the Parent Task Queue partition that matches that Namespace, if it exists.

All Visibility APIs can be used against active and standby Clusters. This enables the [Temporal UI](https://docs.temporal.io/web-ui) to work seamlessly for Global Namespaces. Applications making API calls directly to the Temporal Visibility API continue to work even if a Global Namespace is in standby mode. However, they might see a lag due to replication delay when querying the Workflow Execution state from a standby Cluster.

#### Namespace Versions

A _version_ is a concept in Multi-Cluster Replication that describes the chronological order of events per Namespace. With Multi-Cluster Replication, all Namespace change events and Workflow Execution History events are replicated asynchronously for high throughput. This means that data across clusters is **not** strongly consistent. To guarantee that Namespace data and Workflow Execution data achieve eventual consistency (especially when there is a data conflict during a failover), a **version** is introduced and attached to Namespaces. All Workflow Execution History entries generated in a Namespace will also come with the version attached to that Namespace.
All participating Clusters are pre-configured with a unique initial version and a shared version increment:

- `initial version < shared version increment`

When performing a failover for a Namespace from one Cluster to another, the version attached to the Namespace is changed by the following rule:

- Among all versions that satisfy `version % (shared version increment) == (new active cluster's initial version)`, find the smallest version that also satisfies `version >= old version in namespace`.

When there is a data conflict, a comparison is made, and the Workflow Execution History entries with the highest version are considered the source of truth.

When a cluster is trying to mutate a Workflow Execution History, the version is checked. A cluster can mutate a Workflow Execution History only if both of the following are true:

- The version in the Namespace belongs to this cluster, i.e. `(version in namespace) % (shared version increment) == (this cluster's initial version)`
- The version of this Workflow Execution History's last entry (event) is less than or equal to the version in the Namespace, i.e. `(last event's version) <= (version in namespace)`
Namespace version change example

Assume the following scenario:

- Cluster A comes with initial version: 1
- Cluster B comes with initial version: 2
- Shared version increment: 10

T = 0: Namespace α is registered, with active Cluster set to Cluster A

```
namespace α's version is 1
all workflow events generated within this namespace will come with version 1
```

T = 1: Namespace β is registered, with active Cluster set to Cluster B

```
namespace β's version is 2
all workflow events generated within this namespace will come with version 2
```

T = 2: Namespace α is updated with active Cluster set to Cluster B

```
namespace α's version is 2
all workflow events generated within this namespace will come with version 2
```

T = 3: Namespace β is updated with active Cluster set to Cluster A

```
namespace β's version is 11
all workflow events generated within this namespace will come with version 11
```
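The version-selection rule above can be sketched in a few lines of Go (illustrative, not server code); it reproduces the values in this example:

```go
// nextFailoverVersion returns the smallest version v such that
// v % increment == initial (the new active Cluster's initial version)
// and v >= current (the Namespace's current version).
func nextFailoverVersion(current, initial, increment int64) int64 {
	v := (current/increment)*increment + initial
	if v < current {
		v += increment
	}
	return v
}

// nextFailoverVersion(1, 2, 10) == 2  (Namespace α fails over to Cluster B)
// nextFailoverVersion(2, 1, 10) == 11 (Namespace β fails over to Cluster A)
```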
#### Version history

Version history is a concept that provides a high-level summary of version information with regard to Workflow Execution History. Whenever a new Workflow Execution History entry is generated, the version from the Namespace is attached. The Workflow Execution's mutable state keeps track of all history entries (events) and the corresponding versions.
Version history example (without data conflict) - Cluster A comes with initial version: 1 - Cluster B comes with initial version: 2 - Shared version increment: 10 T = 0: adding event with event ID == 1 & version == 1 View in both Cluster A & B ``` | -------- | --------------- | --------------- | ------- | | Events | Version History | | | | -------- | --------------- | --------------- | ------- | | Event ID | Event Version | Event ID | Version | | -------- | ------------- | --------------- | ------- | | 1 | 1 | 1 | 1 | | -------- | ------------- | --------------- | ------- | ``` T = 1: adding event with event ID == 2 & version == 1 View in both Cluster A & B ``` | -------- | --------------- | --------------- | ------- | | Events | Version History | | | | -------- | --------------- | --------------- | ------- | | Event ID | Event Version | Event ID | Version | | -------- | ------------- | --------------- | ------- | | 1 | 1 | 2 | 1 | | 2 | 1 | | | | -------- | ------------- | --------------- | ------- | ``` T = 2: adding event with event ID == 3 & version == 1 View in both Cluster A & B ``` | -------- | --------------- | --------------- | ------- | | Events | Version History | | | | -------- | --------------- | --------------- | ------- | | Event ID | Event Version | Event ID | Version | | -------- | ------------- | --------------- | ------- | | 1 | 1 | 3 | 1 | | 2 | 1 | | | | 3 | 1 | | | | -------- | ------------- | --------------- | ------- | ``` T = 3: Namespace failover triggered, Namespace version is now 2 adding event with event ID == 4 & version == 2 View in both Cluster A & B ``` | -------- | --------------- | --------------- | ------- | | Events | Version History | | | | -------- | --------------- | --------------- | ------- | | Event ID | Event Version | Event ID | Version | | -------- | ------------- | --------------- | ------- | | 1 | 1 | 3 | 1 | | 2 | 1 | 4 | 2 | | 3 | 1 | | | | 4 | 2 | | | | -------- | ------------- | --------------- | ------- | ``` T = 4: adding event with event ID == 5 & version == 2 View in both Cluster A & B ``` | -------- | --------------- | --------------- | ------- | | Events | Version History | | | | -------- | --------------- | --------------- | ------- | | Event ID | Event Version | Event ID | Version | | -------- | ------------- | --------------- | ------- | | 1 | 1 | 3 | 1 | | 2 | 1 | 5 | 2 | | 3 | 1 | | | | 4 | 2 | | | | 5 | 2 | | | | -------- | ------------- | --------------- | ------- | ```
Since Temporal is an AP system (in CAP terms, it favors availability over cross-Cluster consistency), during failover (a change of the active Temporal Cluster for a Namespace) there can exist cases where more than one Cluster modifies a Workflow Execution, causing its Workflow Execution History to diverge. The example below shows how the version history looks under such conditions.
Version history example (with data conflict)

Below is the version history of the same Workflow Execution in two different Clusters.

- Cluster A comes with initial version: 1
- Cluster B comes with initial version: 2
- Cluster C comes with initial version: 3
- Shared version increment: 10

T = 0: View in both Cluster B & C

```
| -------- | ------------- | --------------- | ------- |
| Events                   | Version History           |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| -------- | ------------- | --------------- | ------- |
| 1        | 1             | 2               | 1       |
| 2        | 1             | 3               | 2       |
| 3        | 2             |                 |         |
| -------- | ------------- | --------------- | ------- |
```

T = 1: adding event with event ID == 4 & version == 2 in Cluster B

```
| -------- | ------------- | --------------- | ------- |
| Events                   | Version History           |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| -------- | ------------- | --------------- | ------- |
| 1        | 1             | 2               | 1       |
| 2        | 1             | 4               | 2       |
| 3        | 2             |                 |         |
| 4        | 2             |                 |         |
| -------- | ------------- | --------------- | ------- |
```

T = 1: Namespace fails over to Cluster C; adding event with event ID == 4 & version == 3 in Cluster C

```
| -------- | ------------- | --------------- | ------- |
| Events                   | Version History           |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| -------- | ------------- | --------------- | ------- |
| 1        | 1             | 2               | 1       |
| 2        | 1             | 3               | 2       |
| 3        | 2             | 4               | 3       |
| 4        | 3             |                 |         |
| -------- | ------------- | --------------- | ------- |
```

T = 2: replication task from Cluster C arrives in Cluster B

Note: the events now form a tree with two branches.

```
Events (shared branch):
| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |
          /                 \
Branch from Cluster B         Branch from Cluster C
| Event ID | Event Version |  | Event ID | Event Version |
| -------- | ------------- |  | -------- | ------------- |
| 4        | 2             |  | 4        | 3             |

Version histories (shared prefix, then one entry per branch):
| Event ID | Version |
| -------- | ------- |
| 2        | 1       |
| 3        | 2       |
          /           \
| Event ID | Version |  | Event ID | Version |
| -------- | ------- |  | -------- | ------- |
| 4        | 2       |  | 4        | 3       |
```

T = 2: replication task from Cluster B arrives in Cluster C, producing the same tree as above.
#### Conflict resolution

When a Workflow Execution History diverges, proper conflict resolution is applied.

In Multi-Cluster Replication, Workflow Execution History Events are modeled as a tree, as shown in the second example in [Version history](#version-history). Workflow Execution Histories that diverge have more than one history branch. Among all history branches, the history branch with the highest version is considered the `current branch`, and the Workflow Execution's mutable state is a summary of the current branch. Whenever there is a switch between Workflow Execution History branches, a complete rebuild of the Workflow Execution's mutable state occurs.

Temporal Multi-Cluster Replication relies on asynchronous replication of Events across Clusters, so in the case of a failover, it is possible for an Activity Task to be dispatched again to the newly active Cluster due to replication task lag. This also means that whenever a Workflow Execution is updated after a failover by the new Cluster, any previous replication tasks for that Execution cannot be applied. This results in the loss of some progress made by the Workflow Execution in the previously active Cluster. During such conflict resolution, Temporal re-injects any external Events, like Signals, into the new Event History before discarding replication tasks.

Even though some progress can roll back during failovers, Temporal guarantees that Workflow Executions won't get stuck and will continue to make forward progress.

Activity Execution completions are not forwarded across Clusters. Any outstanding Activities will eventually time out based on the configuration. Your application should have retry logic in place so that the Activity gets retried and dispatched again to a Worker after the failover to the new Cluster. Handling this is similar to handling an Activity Task timeout caused by a Worker restarting.

#### Zombie Workflows

There is an existing contract that, for any combination of Namespace and Workflow Id, at most one Run (Namespace + Workflow Id + Run Id) can be open/executing at a time.

Multi-Cluster Replication aims to keep the Workflow Execution History as up-to-date as possible among all participating Clusters. Because Workflow Execution History events are replicated asynchronously, different Runs (same Namespace and Workflow Id) can arrive at the target Cluster at different times, sometimes out of order, as shown below:

```
| --------- |          | ------------- |          | --------- |
| Cluster A |          | Network Layer |          | Cluster B |
| --------- |          | ------------- |          | --------- |
      |  Run 1 Replication Events  |                    |
      | -------------------------> |                    |
      |  Run 2 Replication Events  |                    |
      | -------------------------> |                    |
      |                            |  Run 2 Replication Events
      |                            | -----------------> |
      |                            |  Run 1 Replication Events
      |                            | -----------------> |
```

Because Run 2 appears in Cluster B first, Run 1 cannot be replicated as "runnable" due to the rule that at most one Run can be open (see above); thus the "zombie" Workflow Execution state is introduced. A "zombie" state is one in which a Workflow Execution cannot be actively mutated by a Cluster (assuming the corresponding Namespace is active in this Cluster). A zombie Workflow Execution can only be changed by a replication Task.
Run 1 will be replicated similarly to Run 2, except that Run 1's execution becomes a "zombie" before Run 1 reaches completion.

#### Workflow Task processing

In the context of Multi-Cluster Replication, a Workflow Execution's mutable state is an entity that tracks all pending tasks. Prior to the introduction of Multi-Cluster Replication, Workflow Execution History entries (events) came from a single branch, and the Temporal Server would only append new entries (events) to the Workflow Execution History.

After the introduction of Multi-Cluster Replication, it is possible for a Workflow Execution to have multiple Workflow Execution History branches. Tasks generated according to one history branch may be invalidated by a switch of history branches during conflict resolution.

Example:

T = 0: task A is generated according to Event Id: 4, version: 2

```
| -------- | ------------- |
| Events                   |
| -------- | ------------- |
| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |
| -------- | ------------- |
            |
| -------- | ------------- |
| Event ID | Event Version |
| -------- | ------------- |
| 4        | 2             | <-- task A belongs to this event
| -------- | ------------- |
```

T = 1: conflict resolution happens; the Workflow Execution's mutable state is rebuilt, and history Event Id: 4, version: 3 is written to persistence

```
| -------- | ------------- |
| Events                   |
| -------- | ------------- |
| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |
| -------- | ------------- |
           /                \
| Event ID | Event Version |  | Event ID | Event Version |
| -------- | ------------- |  | -------- | ------------- |
| 4        | 2             |  | 4        | 3             |
  ^-- task A belongs here       ^-- current branch / mutable state
```

T = 2: task A is loaded. At this time, due to the rebuild of the Workflow Execution's mutable state (conflict resolution), task A is no longer relevant (task A's corresponding Event belongs to a non-current branch). Task processing logic verifies both the Event Id and the version of the task against the corresponding Workflow Execution's mutable state, then discards task A.

## How to set up Multi-Cluster Replication {#set-up-multi-cluster-replication}

The [Multi-Cluster Replication](/self-hosted-guide/multi-cluster-replication) feature asynchronously replicates Workflow Execution Event Histories from active Clusters to other passive Clusters, and can be enabled by setting the appropriate values in the `clusterMetadata` section of your configuration file:

1. `enableGlobalNamespace` must be set to `true`.
2. `failoverVersionIncrement` must be equal across connected Clusters.
3. `initialFailoverVersion` must be assigned a different value in each Cluster; no two connected Clusters may share the same value.

After the above conditions are satisfied, you can start to configure a multi-cluster setup.

#### Set up Multi-Cluster Replication prior to v1.14

You can set this up with the [`clusterMetadata` configuration](/references/configuration#clustermetadata); however, this is meant to be only a conceptual guide rather than a detailed tutorial.

:::tip

If you need help when setting up, please reach out to our [community Slack](https://temporalio.slack.com).
Good places to start include the **\#support-community** channel, searching through previous conversations, and asking our unusually excellent Temporal-trained large language model (visit **\#ask-ai**). Need a Slack invitation? Here's an [invitation link](https://temporal.io/slack).

:::

For example:

```yaml
---
# cluster A
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterA"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"
---
# cluster B
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterB"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"
```

#### Set up Multi-Cluster Replication in v1.14 and later

You still need to set up the local cluster [`clusterMetadata` configuration](/references/configuration#clustermetadata). For example:

```yaml
---
# cluster A
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterA"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
---
# cluster B
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterB"
  currentClusterName: "clusterB"
  clusterInformation:
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"
```

Then you can use the Temporal CLI to add cluster connections. All operations should be executed in both Clusters.

```shell
# Add a cluster connection
temporal operator cluster upsert --frontend-address="localhost:8233"

# Disable a connection
temporal operator cluster upsert --frontend-address="localhost:8233" --enable-connection=false

# Delete a connection
temporal operator cluster remove --name="someClusterName"
```

---

## Managing Namespaces

:::info Open source Temporal

This page covers Namespace operations for **open source Temporal**. For core Namespace concepts, see [Temporal Namespace](/namespaces). For Temporal Cloud, see [Temporal Cloud Namespaces](/cloud/namespaces).

:::

A [Namespace](/namespaces) is a unit of isolation within the Temporal Platform. Before you can run Workflows, you must register at least one Namespace with your Temporal Service.

## Create a Namespace

Registering a Namespace creates it on the Temporal Service. When you register a Namespace, you must set a [Retention Period](/temporal-service/temporal-server#retention-period) that determines how long closed Workflow Execution histories are kept.
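For example, here is a sketch using the Temporal CLI (the Namespace name is illustrative; check `temporal operator namespace create --help` for the exact flags in your CLI version):

```bash
temporal operator namespace create \
  --namespace my-namespace \
  --retention 72h
```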
You can create Namespaces using:

- **Temporal CLI** (recommended): [`temporal operator namespace create`](/cli/operator#create)
- **Go SDK**: [`RegisterNamespace`](/develop/go/namespaces#register-namespace)
- **Java SDK**: [`RegisterNamespace`](/develop/java/namespaces#register-namespace)
- **TypeScript SDK**: [Namespace management](/develop/typescript/namespaces#register-namespace)

### The default Namespace

If no Namespace is specified, SDKs and the CLI use the `default` Namespace. You must register this Namespace before using it. When deploying with Docker Compose or the [auto-setup image](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh), the `default` Namespace is created automatically. When deploying with [Helm charts](https://github.com/temporalio/helm-charts), create it manually:

```bash
temporal operator namespace create --namespace default
```

Namespace registration takes up to 15 seconds to complete. Wait for this process to finish before making calls to the Namespace.

## Manage Namespaces

Common Namespace management operations:

| Operation | CLI Command | Description |
|-----------|-------------|-------------|
| List | [`temporal operator namespace list`](/cli/operator#list) | List all registered Namespaces |
| Describe | [`temporal operator namespace describe`](/cli/operator#describe) | Get details for a Namespace |
| Update | [`temporal operator namespace update`](/cli/operator#update) | Update Namespace configuration |
| Delete | [`temporal operator namespace delete`](/cli/operator#delete) | Delete a Namespace and all its data |

For SDK-based Namespace management:

- [Go SDK namespace management](/develop/go/namespaces#manage-namespaces)
- [Java SDK namespace management](/develop/java/namespaces#manage-namespaces)
- [TypeScript SDK namespace management](/develop/typescript/namespaces#manage-namespaces)
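As a sketch of the Go SDK route (see the Go SDK link above for the full guide; the Namespace name and retention value are illustrative, and the request type comes from `go.temporal.io/api`):

```go
package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/api/workflowservice/v1"
	"go.temporal.io/sdk/client"
	"google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// NamespaceClient exposes Namespace registration and management APIs.
	nsClient, err := client.NewNamespaceClient(client.Options{HostPort: client.DefaultHostPort})
	if err != nil {
		log.Fatalln("Unable to create Namespace client:", err)
	}
	defer nsClient.Close()

	// Register a Namespace with a 3-day Retention Period.
	err = nsClient.Register(context.Background(), &workflowservice.RegisterNamespaceRequest{
		Namespace:                        "my-namespace",
		WorkflowExecutionRetentionPeriod: durationpb.New(72 * time.Hour),
	})
	if err != nil {
		log.Fatalln("Unable to register Namespace:", err)
	}
}
```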
### Deprecate vs Delete

- **Deprecate**: Prevents new Workflow Executions from starting, but existing Workflows continue to run.
- **Delete**: Removes the Namespace and all Workflow Executions. This is irreversible.

## Security

Use a custom [Authorizer](/self-hosted-guide/security#authorizer-plugin) on your Frontend Service to control who can create, update, or deprecate Namespaces. Without an Authorizer configured, Temporal uses the `nopAuthority` authorizer, which allows all API calls unconditionally.

For Temporal Cloud, [role-based access controls](/cloud/users#namespace-level-permissions) provide namespace-level authorization without custom configuration.

## Best practices

For Namespace naming conventions, organizational patterns, and production safeguards, see [Namespace Best Practices](/best-practices/managing-namespace).

---

## Server Frontend API Reference

While it's usually easiest to interact with [Temporal Server](/temporal-service/temporal-server) via a [Client SDK](/encyclopedia/temporal-sdks#temporal-client) or the [Temporal CLI](https://docs.temporal.io/cli), you can also use its gRPC API.

## gRPC API

Our Client and Worker SDKs use the gRPC API. The API reference is located here: [`api-docs.temporal.io`](https://api-docs.temporal.io/)

### Use with code

Usually you interact with the API via high-level methods like `client.workflow.start()`. However, Client SDKs also expose the underlying gRPC services. For instance, the TypeScript SDK has:

- WorkflowService: [`Client.connection.workflowService`](https://typescript.temporal.io/api/classes/client.Connection#workflowservice)
- OperatorService: [`Client.connection.operatorService`](https://typescript.temporal.io/api/classes/client.Connection/#operatorservice)
- HealthService: [`Client.connection.healthService`](https://typescript.temporal.io/api/classes/client.Connection/#healthservice)

If you're not using an SDK Client (rare), you can generate gRPC client stubs by:

- Cloning [`temporalio/api`](https://github.com/temporalio/api) (the repo with the protobuf files)
- Generating code in [your language](https://grpc.io/docs/languages/)

### Use manually

To query the API manually via the command line or a GUI, first:

- If you don't already have a Server to connect to, run [`temporal server start-dev`](/cli/server#start-dev)
- Clone this repo:

```shell
git clone https://github.com/temporalio/api.git
cd api
```

#### With command line

Install [`evans`](https://github.com/ktr0731/evans#installation).

```shell
cd /path/to/api
evans --proto temporal/api/workflowservice/v1/service.proto --port 7233
```

To connect to Temporal Cloud, set the host, cert, cert key, and TLS flag:

```shell
evans --proto temporal/api/workflowservice/v1/service.proto --host devrel.a2dd6.tmprl.cloud --port 7233 --tls --cert /Users/me/certs/temporal.pem --certkey /Users/me/certs/temporal.key
```

Once inside the evans prompt, you can run commands like `help`, `show service` to list available methods, and `call ListWorkflowExecutions`.

#### With a GUI

- Install [BloomRPC](https://github.com/bloomrpc/bloomrpc#installation).
- Open the app.
- Select the "Import Paths" button on the top left and enter the path to the cloned repo: `/path/to/api`
- Select the "Import protos" + button and select this file:

```shell
/path/to/api/temporal/api/workflowservice/v1/service.proto
```

- A list of methods should appear in the sidebar. Select one.
- Edit the JSON in the left pane.
- Hit `Cmd/Ctrl-Enter` or click the play button to get a response from the server on the right.

One downside compared to the [command line](#with-command-line) is that it doesn't show enum names, just numbers like `"task_queue_type": 1`.

## HTTP API

The Web UI uses [`temporalio/ui-server`](https://github.com/temporalio/ui-server), an HTTP proxy for the gRPC API.

:::caution

As soon as [this HTTP API proposal](https://github.com/temporalio/proposals/pull/79) is implemented, it will be the recommended HTTP API, at which point the `ui-server` API may be discontinued. Further, `ui-server` was designed for use in the UI and may make breaking changes.

:::

To view the API docs, run [`temporal server start-dev`](/cli#start-dev-server) and open [`localhost:8233/openapi/`](http://localhost:8233/openapi/).

To make a request, run:

```sh
$ curl localhost:8233/api/v1/namespaces/default/workflows
{
  "executions": [
    {
      "execution": {
        "workflowId": "workflow-_homozdkzYWLRpX6Rfou5",
        "runId": "c981cb26-baa4-4af8-ac5f-866451d3f83c"
      },
      "type": { "name": "example" },
      "startTime": ...
    },
    ...
  ],
  "nextPageToken": null
}
```

---

## Self-hosted Temporal Nexus

:::tip SUPPORT, STABILITY, and DEPENDENCY INFO

Temporal Nexus is now [Generally Available](/evaluate/development-production-features/release-stages#general-availability). Learn why you should use Nexus in the [evaluation guide](/evaluate/nexus).

:::

[Temporal Nexus](/nexus) allows you to reliably connect Temporal Applications.
It was designed with Durable Execution in mind and enables each team to have its own Namespace for improved modularity, security, debugging, and fault isolation.

## Enable Nexus

Enable Nexus in your self-hosted Temporal Service by updating the server's static configuration file and enabling Nexus through dynamic config, then setting the public callback URL and allowed callback addresses.

Nexus is only supported in single-cluster setups at this time. For additional information on operating Nexus workloads in your self-hosted cluster, see [Nexus Architecture](https://github.com/temporalio/temporal/blob/main/docs/architecture/nexus.md).

:::note

Replace `$PUBLIC_URL` with a URL value that's accessible to external callers or internally within the cluster. Currently, external Nexus calls are considered experimental, so it should be safe to use the address of an internal load balancer for the Frontend Service.

:::

To enable Nexus in your deployment:

1. Ensure that the server's static configuration file enables the HTTP API.

```yaml
services:
  frontend:
    rpc:
      # NOTE: keep other fields as they were
      httpPort: 7243

clusterMetadata:
  # NOTE: keep other fields as they were
  clusterInformation:
    active:
      # NOTE: keep other fields as they were
      httpAddress: $PUBLIC_URL:7243
```

2. Enable Nexus through dynamic config, set the public callback URL, and set the allowed callback addresses.

```yaml
system.enableNexus:
  - value: true
component.nexusoperations.callback.endpoint.template:
  # The URL must be publicly accessible if the callback is meant to be called by external services.
  # When using Nexus for cross-namespace calls, the URL's host is irrelevant, as the address is resolved using
  # membership. The URL is a Go template that interpolates the `NamespaceName` and `NamespaceID` variables.
  - value: https://$PUBLIC_URL:7243/namespaces/{{.NamespaceName}}/nexus/callback
component.callbacks.allowedAddresses:
  # This list is a security mechanism for limiting which callback URLs are accepted by the server.
  # Attackers may leverage the callback mechanism to force the server to call arbitrary URLs.
  # The config below is only recommended for development; tune this to your requirements.
  - value:
      - Pattern: "*"
        AllowInsecure: true
```

## Build and use Nexus Services

Nexus has a familiar programming model for building and using Nexus Services with the Temporal SDK. The [Nexus Operation lifecycle](/nexus/operations#operation-lifecycle) supports both synchronous and asynchronous Operations. Nexus Operations can be implemented with Temporal primitives, like a Workflow, or execute arbitrary code.

:::tip RESOURCES

- [Go SDK - Nexus quick start and code sample](/develop/go/nexus)
- [Java SDK - Nexus quick start and code sample](/develop/java/nexus)

:::

## Learn more

- [Evaluate](/evaluate/nexus) why you should use Nexus and watch the [Nexus keynote and demo](https://youtu.be/qqc2vsv1mrU?feature=shared&t=2082).
- [Learn key Nexus concepts](/nexus) and how Nexus works in the [Nexus deep dive talk](https://www.youtube.com/watch?v=izR9dQ_eIe4&t=934s).
- Explore [additional resources](/evaluate/nexus#learn-more) to learn more about Nexus.

---

## Upgrade the Temporal Server

## How to upgrade the Temporal Server version {#upgrade-server}

If a newer version of the [Temporal Server](/temporal-service/temporal-server) is available, a notification appears in the Temporal Web UI.

:::info

If you are using a version that is older than 1.0.0, reach out to us at [community.temporal.io](http://community.temporal.io) to ask how to upgrade.
:::

First, check whether an upgrade to the database schema is required for the version you wish to upgrade to. If a database schema upgrade is required, it will be called out directly in the [release notes](https://github.com/temporalio/temporal/releases). Some releases require changes to the schema, and some do not. We ensure that any consecutive versions are compatible in terms of database schema upgrades, features, and system behavior; however, there is no guarantee of compatibility between _any_ two non-consecutive versions.

### Key considerations

When upgrading the Temporal Server, there are two key considerations to keep in mind:

1. **Sequential Upgrades:** Temporal Server should be upgraded sequentially. That is, if you're on version `v1.n.x`, your next upgrade should be to `v1.n+1.x` or the closest available subsequent version. This sequence should be repeated until your desired version is reached.
2. **Data Compatibility:** During an upgrade, the Temporal Server either updates or restructures the existing version data to match the data format of the newer version. Temporal Server ensures backward compatibility only between two successive minor versions. Consequently, skipping versions during an upgrade may lead to older data formats becoming unreadable. If the previous data format cannot be interpreted and converted to the newer format, the upgrade process will be unsuccessful.

### Step-by-Step Upgrade Procedure

Upgrading the Temporal Server requires a methodical approach to ensure data integrity, compatibility, and a seamless transition between versions. The following documentation outlines the step-by-step process to successfully upgrade your Temporal Server. When upgrading your Temporal Server version, ensure that you upgrade sequentially.

1. **Upgrade Database Schema:** Before initiating the Temporal Server upgrade, use one of the recommended upgrade tools to update your database schema. This ensures it is aligned with the version of Temporal Server you aim to upgrade to.
2. **Upgrade Temporal Server:** Once the database schema is updated, proceed to upgrade the Temporal Server deployment to the next sequential version.
3. **Iterative Upgrades** (optional): Continue this process (steps 1 and 2) iteratively until you reach the desired Temporal Server version.

By adhering to the above guidelines and following the step-by-step procedure, you can ensure a smooth and successful upgrade of your Temporal Server.

The Temporal Server upgrade updates or rewrites the old version data with the format introduced in the newer version. Because Temporal Server guarantees backward compatibility only between two consecutive minor versions, and because older versions of the code are eventually removed from the code base, skipping versions when upgrading might cause older formats to become unrecognizable. If the old format of the data can't be read and rewritten to the new format, the upgrade fails.

Check the [Temporal Server releases](https://github.com/temporalio/temporal/releases) and follow these releases in order. You can skip patch versions; use the latest patch of a minor version when upgrading. Also, be aware that each upgrade requires the History Service to load all Shards and update the Shard metadata, so allow approximately 10 minutes on each version for these processes to complete before upgrading to the next version.

Use one of the upgrade tools to upgrade your database schema so it is compatible with the Temporal Server version you are upgrading to.
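To make the sequential requirement concrete, here is an illustrative sketch of the overall loop, assuming a PostgreSQL persistence store; the version list, schema directory, and deploy step are placeholders for your own environment, not literal commands to copy:

```bash
# Illustrative only -- substitute your real versions, schema tool, and deploy tooling.
for v in "v1.n+1.x" "v1.n+2.x"; do
  echo "Upgrading to ${v}"
  # 1. Check out (or download) release $v so you have its schema directory,
  #    then bring the schema up to date for that release:
  ./temporal-sql-tool --ep localhost -p 5432 -u temporal -pw temporal --pl postgres12 \
    --db temporal update-schema -d ./schema/postgresql/v12/temporal/versioned
  # 2. Roll out the Temporal Server at version $v (Helm, image tags, etc. -- environment-specific).
  # 3. Allow ~10 minutes for the History Service to reload Shards and update
  #    Shard metadata before moving on to the next version.
done
```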
If you are using a schema tools version prior to Temporal Server v1.8.0, we strongly recommend _never_ using the "dryrun" (`-y`, or `--dryrun`) option in any of your schema update commands, because it might lead to data loss: when used, it creates a new database and drops your existing one. This flag was removed in the 1.8.0 release.

### Upgrade Cassandra schema

If you are using Cassandra for your Temporal Service's persistence, use the `temporal-cassandra-tool` to upgrade both the default Persistence and Visibility schemas.

**Example default schema upgrade:**

```bash
temporal_v1.2.1 $ temporal-cassandra-tool \
  --tls \
  --tls-ca-file <...> \
  --user \
  --password \
  --endpoint \
  --keyspace temporal \
  --timeout 120 \
  update \
  --schema-dir ./schema/cassandra/temporal/versioned
```

**Example visibility schema upgrade:**

```bash
temporal_v1.2.1 $ temporal-cassandra-tool \
  --tls \
  --tls-ca-file <...> \
  --user \
  --password \
  --endpoint \
  --keyspace temporal_visibility \
  --timeout 120 \
  update \
  --schema-dir ./schema/cassandra/visibility/versioned
```

### Upgrade PostgreSQL or MySQL schema

If you are using MySQL or PostgreSQL, use the `temporal-sql-tool`, which works similarly to the `temporal-cassandra-tool`. Refer to this [Makefile](https://github.com/temporalio/temporal/blob/v1.4.1/Makefile#L367-L383) for context.

#### PostgreSQL

**Example default schema upgrade:**

```bash
./temporal-sql-tool \
  --tls \
  --tls-enable-host-verification \
  --tls-cert-file \
  --tls-key-file \
  --tls-ca-file \
  --ep localhost -p 5432 -u temporal -pw temporal --pl postgres --db temporal update-schema -d ./schema/postgresql/v96/temporal/versioned
```

**Example visibility schema upgrade:**

```bash
./temporal-sql-tool \
  --tls \
  --tls-enable-host-verification \
  --tls-cert-file \
  --tls-key-file \
  --tls-ca-file \
  --ep localhost -p 5432 -u temporal -pw temporal --pl postgres --db temporal_visibility update-schema -d ./schema/postgresql/v96/visibility/versioned
```

If you're upgrading PostgreSQL to v12 or later to enable advanced Visibility features with Temporal Server v1.20, upgrade your PostgreSQL version first, and then run `temporal-sql-tool` with the `postgres12` plugin, as shown in the following example:

```bash
./temporal-sql-tool \
  --tls \
  --tls-enable-host-verification \
  --tls-cert-file \
  --tls-key-file \
  --tls-ca-file \
  --ep localhost -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility update-schema -d ./schema/postgresql/v12/visibility/versioned
```

#### MySQL

**Example default schema upgrade:**

```bash
./temporal-sql-tool \
  --tls \
  --tls-enable-host-verification \
  --tls-cert-file \
  --tls-key-file \
  --tls-ca-file \
  --ep localhost -p 3306 -u root -pw root --pl mysql --db temporal update-schema -d ./schema/mysql/v57/temporal/versioned/
```

**Example visibility schema upgrade:**

```bash
./temporal-sql-tool \
  --tls \
  --tls-enable-host-verification \
  --tls-cert-file \
  --tls-key-file \
  --tls-ca-file \
  --ep localhost -p 3306 -u root -pw root --pl mysql --db temporal_visibility update-schema -d ./schema/mysql/v57/visibility/versioned/
```

If you're upgrading MySQL to v8.0.17 or later to enable advanced Visibility features with Temporal Server v1.20, upgrade your MySQL version first, and then run `temporal-sql-tool` with the `mysql8` plugin, as shown in the following example:

```bash
./temporal-sql-tool \
  --tls \
  --tls-enable-host-verification \
  --tls-cert-file \
  --tls-key-file \
  --tls-ca-file \
  --ep localhost -p 3306 -u root -pw root --pl mysql8 --db temporal_visibility update-schema -d ./schema/mysql/v8/visibility/versioned
```
### Roll-out technique

We recommend preparing a staging Temporal Service and then doing the following to verify that the upgrade is successful:

1. Create some simulation load on the staging Temporal Service.
2. Upgrade the database schema in the staging Temporal Service.
3. Wait and observe for a few minutes to verify that there is no unstable behavior from either the server or the simulation load logic.
4. Upgrade the server.
5. Repeat the same steps on the live environment Temporal Service.

---

## Self-hosted Visibility feature setup

A [Visibility](/temporal-service/visibility) store is set up as a part of your [Persistence store](/temporal-service/persistence) to enable listing and filtering details about Workflow Executions that exist on your Temporal Service.

A Visibility store is required in a Temporal Service setup because it is used by the Temporal Web UI and CLI to pull Workflow Execution data and enables features like batch operations on a group of Workflow Executions.

With the Visibility store, you can use [List Filters](/list-filter) with [Search Attributes](/search-attribute) to list and filter Workflow Executions that you want to review. Setting up [advanced Visibility](/visibility#advanced-visibility) enables access to creating and using multiple custom Search Attributes with your List Filters. For details, see [Search Attributes](/search-attribute).

Note that if you use MySQL, PostgreSQL, or SQLite as your Visibility store, Temporal Server version 1.20 and later supports advanced Visibility features on MySQL (version 8.0.17 and later), PostgreSQL (version 12 and later), and SQLite (v3.31.0 and later), in addition to Elasticsearch.

To enable advanced Visibility on your SQL databases, ensure that you do the following:

- [Upgrade your Temporal Server](/self-hosted-guide/upgrade-server#upgrade-server) to version 1.20 or later.
- [Update your database schemas](/self-hosted-guide/upgrade-server#upgrade-server) for MySQL to version 8.0.17 (or later), PostgreSQL to version 12 (or later), or SQLite to v3.31.0 (or later).

Beginning with Temporal Server v1.21, you can set up a secondary Visibility store in your Temporal Service to enable [Dual Visibility](/dual-visibility). This is useful for migrating your Visibility store database.

#### Supported databases

The following databases are supported as Visibility stores:

- [MySQL](#mysql) v5.7 and later. Use v8.0.17 (or later) with Temporal Server v1.20 or later for advanced Visibility capabilities. Because standard Visibility is deprecated beginning with Temporal Server v1.21, support for older versions of MySQL will be dropped.
- [PostgreSQL](#postgresql) v9.6 and later. Use v12 (or later) with Temporal Server v1.20 or later for advanced Visibility capabilities. Because standard Visibility is deprecated beginning with Temporal Server v1.21, support for older versions of PostgreSQL will be dropped.
- [SQLite](#sqlite) v3.31.0 and later for advanced Visibility capabilities.
- [Cassandra](#cassandra). Support for Cassandra as a Visibility database is deprecated beginning with Temporal Server v1.21. For information on migrating from Cassandra to any of the supported databases, see [Migrating Visibility database](#migrating-visibility-database).
- [Elasticsearch](#elasticsearch) supported versions. We recommend operating a Temporal Service with Elasticsearch as your Visibility store for any use case that spawns more than a few Workflow Executions.
You can use any combination of the supported databases for your Persistence and Visibility stores. For updates, check [Server release notes](https://github.com/temporalio/temporal/releases).

## How to set up MySQL Visibility store {#mysql}

:::tip Support, stability, and dependency info

- MySQL v5.7 and later.
- Support for MySQL v5.7 will be deprecated for all Temporal Server versions after v1.20.
- With Temporal Server version 1.20 and later, advanced Visibility is available on MySQL v8.0.17 and later.

:::

You can set MySQL as your [Visibility store](/temporal-service/visibility). Verify [supported versions](/self-hosted-guide/visibility) before you proceed.

If using MySQL v8.0.17 or later as your Visibility store with Temporal Server v1.20 and later, any [custom Search Attributes](/search-attribute#custom-search-attribute) that you create must be associated with a Namespace in that Temporal Service.

**Persistence configuration**

Set your MySQL Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.

The following example shows how to set a Visibility store `mysql-visibility` and define the datastore configuration in your Temporal Service configuration YAML.

```yaml
#...
persistence:
  #...
  visibilityStore: mysql-visibility
  #...
  datastores:
    default:
      #...
    mysql-visibility:
      sql:
        pluginName: 'mysql8' # For MySQL v8.0.17 and later. For earlier versions, use "mysql" plugin.
        databaseName: 'temporal_visibility'
        connectAddr: ' ' # Remote address of this database; for example, 127.0.0.1:3306
        connectProtocol: ' ' # Protocol example: tcp
        user: 'username_for_auth'
        password: 'password_for_auth'
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: '1h'
  #...
```

For details on the configuration parameters and values, see [Temporal Service configuration](/references/configuration#sql).

To enable advanced Visibility features on your MySQL Visibility store, upgrade to MySQL v8.0.17 or later with Temporal Server v1.20 or later. See [Upgrade Server](/self-hosted-guide/upgrade-server#upgrade-server) on how to upgrade your Temporal Server and database schemas. For example configuration templates, see [MySQL Visibility store configuration](https://github.com/temporalio/temporal/blob/main/config/development-mysql8.yaml).

**Database schema and setup**

Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schemas defined (by supported versions):

- [MySQL v8.0.17 and later](https://github.com/temporalio/temporal/tree/main/schema/mysql/v8/visibility)

The following example shows how the [auto-setup.sh](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh) script sets up your Visibility store.

```bash
#...
# set your MySQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${MYSQL_SEEDS:=}"
: "${MYSQL_USER:=}"
: "${MYSQL_PWD:=}"
: "${MYSQL_TX_ISOLATION_COMPAT:=false}"
#...

# set up MySQL schema
setup_mysql_schema() {
  #...
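  # (In order, the commands below: create the Visibility database if it
  #  doesn't exist, stamp it with base schema version 0.0, then apply the
  #  versioned migrations from VISIBILITY_SCHEMA_DIR.)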
  # use valid schema for the version of the database you want to set up for Visibility
  VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/mysql/${MYSQL_VERSION_DIR}/visibility/versioned
  if [[ ${SKIP_DB_CREATE} != true ]]; then
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" create
  fi
  temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" setup-schema -v 0.0
  temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
  #...
}
```

Note that the script uses [temporal-sql-tool](https://github.com/temporalio/temporal/blob/3b982585bf0124839e697952df4bba01fe4d9543/tools/sql/main.go) to run the setup.

## How to set up PostgreSQL Visibility store {#postgresql}

:::tip Support, stability, and dependency info

- PostgreSQL v9.6 and later.
- With Temporal Server version 1.20 and later, advanced Visibility is available on PostgreSQL v12 and later.
- Support for PostgreSQL v9.6 through v11 will be deprecated for all Temporal Server versions after v1.20; we recommend upgrading to PostgreSQL 12 or later.

:::

You can set PostgreSQL as your [Visibility store](/temporal-service/visibility). Verify [supported versions](/self-hosted-guide/visibility) before you proceed.

If using PostgreSQL v12 or later as your Visibility store with Temporal Server v1.20 and later, any [custom Search Attributes](/search-attribute#custom-search-attribute) that you create must be associated with a Namespace in that Temporal Service.

**Persistence configuration**

Set your PostgreSQL Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.

The following example shows how to set a Visibility store `postgres-visibility` and define the datastore configuration in your Temporal Service configuration YAML.

```yaml
#...
persistence:
  #...
  visibilityStore: postgres-visibility
  #...
  datastores:
    default:
      #...
    postgres-visibility:
      sql:
        pluginName: 'postgres12' # For PostgreSQL v12 and later. For earlier versions, use "postgres" plugin.
        databaseName: 'temporal_visibility'
        connectAddr: ' ' # Remote address of this database; for example, 127.0.0.1:5432
        connectProtocol: ' ' # Protocol example: tcp
        user: 'username_for_auth'
        password: 'password_for_auth'
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: '1h'
  #...
```

To enable advanced Visibility features on your PostgreSQL Visibility store, upgrade to PostgreSQL v12 or later with Temporal Server v1.20 or later. See [Upgrade Server](/self-hosted-guide/upgrade-server#upgrade-server) for details on how to upgrade your Temporal Server and database schemas.

**Database schema and setup**

Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schemas defined (by supported versions):

- [PostgreSQL v12 and later](https://github.com/temporalio/temporal/tree/main/schema/postgresql/v12/visibility)

The following example shows how the [auto-setup.sh](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh) script sets up your PostgreSQL Visibility store.

```bash
#...
# set your PostgreSQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${POSTGRES_SEEDS:=}"
: "${POSTGRES_USER:=}"
: "${POSTGRES_PWD:=}"
#...
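# (The ":=" expansions above assign a default only when a variable is unset,
#  so connection details can be overridden from the environment without
#  editing the script.)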
# set connection details
#...

# set up PostgreSQL schema
setup_postgres_schema() {
  #...
  # use valid schema for the version of the database you want to set up for Visibility
  VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/postgresql/${POSTGRES_VERSION_DIR}/visibility/versioned
  if [[ ${VISIBILITY_DBNAME} != "${POSTGRES_USER}" && ${SKIP_DB_CREATE} != true ]]; then
    temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" create
  fi
  temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
  #...
}
```

Note that the script uses [temporal-sql-tool](https://github.com/temporalio/temporal/blob/3b982585bf0124839e697952df4bba01fe4d9543/tools/sql/main.go) to run the setup.

## How to set up SQLite Visibility store {#sqlite}

:::tip Support, stability, and dependency info

- SQLite v3.31.0 and later.

:::

You can set SQLite as your [Visibility store](/temporal-service/visibility). Verify [supported versions](/self-hosted-guide/visibility) before you proceed.

By default, Temporal uses an in-memory database with SQLite; this means that the database is automatically created when Temporal Server starts and is destroyed when Temporal Server stops. You can change the configuration to use a file-based database so that it is preserved when Temporal Server stops. However, if you use a file-based SQLite database, upgrading your database schema to enable advanced Visibility features is not supported; in this case, you must delete the database and create it again to upgrade.

If using SQLite v3.31.0 and later as your Visibility store with Temporal Server v1.20 and later, any [custom Search Attributes](/search-attribute#custom-search-attribute) that you create must be associated with a Namespace in that Temporal Service.

**Persistence configuration**

Set your SQLite Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.

The following example shows how to set a Visibility store `sqlite-visibility` and define the datastore configuration in your Temporal Service configuration YAML.

```yaml
persistence:
  # ...
  visibilityStore: sqlite-visibility
  # ...
  datastores:
    # ...
    sqlite-visibility:
      sql:
        user: 'username_for_auth'
        password: 'password_for_auth'
        pluginName: 'sqlite'
        databaseName: 'default'
        connectAddr: 'localhost'
        connectProtocol: 'tcp'
        connectAttributes:
          mode: 'memory'
          cache: 'private'
        maxConns: 1
        maxIdleConns: 1
        maxConnLifetime: '1h'
        tls:
          enabled: false
          caFile: ''
          certFile: ''
          keyFile: ''
          enableHostVerification: false
          serverName: ''
```

SQLite (v3.31.0 and later) has advanced Visibility enabled by default.

**Database schema and setup**

Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schemas defined (by supported versions) in https://github.com/temporalio/temporal/blob/main/schema/sqlite/v3/visibility/schema.sql.

For an example of setting up the SQLite schema, see the [Temporalite](https://github.com/temporalio/temporalite/blob/main/server.go) setup.

## How to set up Cassandra Visibility store {#cassandra}

:::tip Support, stability, and dependency info

- Support for Cassandra as a Visibility database is deprecated beginning with Temporal Server v1.21. For updates, check the [Temporal Server release notes](https://github.com/temporalio/temporal/releases).
- We recommend migrating from Cassandra to any of the other supported databases for Visibility.

:::

You can set Cassandra as your [Visibility store](/temporal-service/visibility). Verify [supported versions](/self-hosted-guide/visibility) before you proceed.

Advanced Visibility is not supported with Cassandra. To enable advanced Visibility features, use any of the supported databases, such as MySQL, PostgreSQL, SQLite, or Elasticsearch, as your Visibility store. We recommend using Elasticsearch for any Temporal Service setup that handles more than a few Workflow Executions because it supports the request load on the Visibility store and helps optimize performance.

To migrate from Cassandra to a supported SQL database, see [Migrating Visibility database](#migrating-visibility-database).

**Persistence configuration**

Set your Cassandra Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.

The following example shows how to set a Visibility store `cass-visibility` and define the datastore configuration in your Temporal Service configuration YAML.

```yaml
#...
persistence:
  #...
  visibilityStore: cass-visibility
  #...
  datastores:
    default:
      #...
    cass-visibility:
      cassandra:
        hosts: '127.0.0.1'
        keyspace: 'temporal_visibility'
  #...
```

**Database schema and setup**

Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schemas defined (by supported versions) in https://github.com/temporalio/temporal/tree/main/schema/cassandra/visibility.

The following example shows how the [auto-setup.sh](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh) script sets up your Visibility store.

```bash
#...
# set your Cassandra environment variables
: "${KEYSPACE:=temporal}"
: "${VISIBILITY_KEYSPACE:=temporal_visibility}"
: "${CASSANDRA_SEEDS:=}"
: "${CASSANDRA_PORT:=9042}"
: "${CASSANDRA_USER:=}"
: "${CASSANDRA_PASSWORD:=}"
: "${CASSANDRA_TLS_ENABLED:=}"
: "${CASSANDRA_CERT:=}"
: "${CASSANDRA_CERT_KEY:=}"
: "${CASSANDRA_CA:=}"
: "${CASSANDRA_REPLICATION_FACTOR:=1}"
#...

# set up Cassandra schema
setup_cassandra_schema() {
  #...
  # use valid schema for the version of the database you want to set up for Visibility
  VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/cassandra/visibility/versioned
  if [[ ${SKIP_DB_CREATE} != true ]]; then
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" create -k "${VISIBILITY_KEYSPACE}" --rf "${CASSANDRA_REPLICATION_FACTOR}"
  fi
  temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" setup-schema -v 0.0
  temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
  #...
}
```

## How to integrate Elasticsearch into a Temporal Service {#elasticsearch}

You can integrate Elasticsearch with your Temporal Service as your Visibility store. We recommend using Elasticsearch for large-scale operations on the Temporal Service.

To integrate Elasticsearch with your Temporal Service, edit the `persistence` section of your `development.yaml` configuration file to add Elasticsearch as the `visibilityStore`, and run the index schema setup commands.

**Persistence configuration**

Set your Elasticsearch Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.
The following example shows how to set a Visibility store named `es-visibility` and define the datastore configuration in your Temporal Service configuration YAML.

```yaml
persistence:
  ...
  visibilityStore: es-visibility
  datastores:
    ...
    es-visibility: # Define the Elasticsearch datastore connection information under the `es-visibility` key
      elasticsearch:
        version: "v7"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1_dev
```

**Index schema and index**

The following example shows how the [auto-setup.sh](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh) script sets up an Elasticsearch Visibility store.

```bash
#...
# Elasticsearch
: "${ENABLE_ES:=false}"
: "${ES_SCHEME:=http}"
: "${ES_SEEDS:=}"
: "${ES_PORT:=9200}"
: "${ES_USER:=}"
: "${ES_PWD:=}"
: "${ES_VERSION:=v7}"
: "${ES_VIS_INDEX:=temporal_visibility_v1}"
: "${ES_SEC_VIS_INDEX:=}"
: "${ES_SCHEMA_SETUP_TIMEOUT_IN_SECONDS:=0}"
#...

# ES_SERVER is the URL of the Elasticsearch server; for example, "http://localhost:9200".
SETTINGS_URL="${ES_SERVER}/_cluster/settings"
SETTINGS_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/cluster_settings_${ES_VERSION}.json
TEMPLATE_URL="${ES_SERVER}/_template/temporal_visibility_v1_template"
SCHEMA_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/index_template_${ES_VERSION}.json
INDEX_URL="${ES_SERVER}/${ES_VIS_INDEX}"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${SETTINGS_URL}" -H "Content-Type: application/json" --data-binary "@${SETTINGS_FILE}" --write-out "\n"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${TEMPLATE_URL}" -H 'Content-Type: application/json' --data-binary "@${SCHEMA_FILE}" --write-out "\n"
curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${INDEX_URL}" --write-out "\n"
```

**Elasticsearch privileges**

Ensure that the following privileges are granted for the Elasticsearch Temporal index:

- **Read**
  - [index privileges](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html#privileges-list-indices): `create`, `index`, `delete`, `read`
- **Write**
  - [index privileges](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html#privileges-list-indices): `write`
- **Custom Search Attributes**
  - [index privileges](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html#privileges-list-indices): `manage`
  - [cluster privileges](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html#privileges-list-cluster): `monitor` or `manage`.

## How to set up Dual Visibility {#dual-visibility}

To enable [Dual Visibility](/dual-visibility), set up a secondary Visibility store with your primary Visibility store, and configure your Temporal Service to enable read and/or write operations on the secondary Visibility store.

With Dual Visibility, you can read from only one Visibility store at a time, but you can configure your Temporal Service to write to the primary only, the secondary only, or to both primary and secondary stores.

#### Set up secondary Visibility store

Set the secondary store with the `secondaryVisibilityStore` configuration key in your Persistence configuration, and then define the secondary Visibility store configuration under `datastores`. You can configure any of the [supported databases](/self-hosted-guide/visibility) as your secondary store.

Examples:

To configure MySQL as a secondary store with Cassandra as your primary store, do the following.
```yaml
persistence:
  visibilityStore: cass-visibility # This is your primary Visibility store
  secondaryVisibilityStore: mysql-visibility # This is your secondary Visibility store
  datastores:
    cass-visibility:
      cassandra:
        hosts: '127.0.0.1'
        keyspace: 'temporal_primary_visibility'
    mysql-visibility:
      sql:
        pluginName: 'mysql8' # Verify supported versions. Use a version of SQL that supports advanced Visibility.
        databaseName: 'temporal_secondary_visibility'
        connectAddr: '127.0.0.1:3306'
        connectProtocol: 'tcp'
        user: 'temporal'
        password: 'temporal'
```

To configure Elasticsearch as both your primary and secondary store, use the configuration key `elasticsearch.indices.secondary_visibility`, as shown in the following example.

```yaml
persistence:
  visibilityStore: es-visibility
  datastores:
    es-visibility:
      elasticsearch:
        version: 'v7'
        logLevel: 'error'
        url:
          scheme: 'http'
          host: '127.0.0.1:9200'
        indices:
          visibility: temporal_visibility_v1
          secondary_visibility: temporal_visibility_v1_new
        closeIdleConnectionsInterval: 15s
```

#### Database schema and setup

The database schema and setup for a secondary store depend on the database you plan to use.

- [MySQL](#mysql)
- [PostgreSQL](#postgresql)
- [SQLite](#sqlite)
- [Elasticsearch](#elasticsearch)

For the Cassandra and MySQL configuration in the previous example, an example setup script would be as follows.

```bash
#...
# set your Cassandra environment variables
: "${KEYSPACE:=temporal}"
: "${VISIBILITY_KEYSPACE:=temporal_primary_visibility}"
: "${CASSANDRA_SEEDS:=}"
: "${CASSANDRA_PORT:=9042}"
: "${CASSANDRA_USER:=}"
: "${CASSANDRA_PASSWORD:=}"
: "${CASSANDRA_TLS_ENABLED:=}"
: "${CASSANDRA_CERT:=}"
: "${CASSANDRA_CERT_KEY:=}"
: "${CASSANDRA_CA:=}"
: "${CASSANDRA_REPLICATION_FACTOR:=1}"
#...

# set up Cassandra schema
setup_cassandra_schema() {
  #...
  # use valid schema for the version of the database you want to set up for Visibility
  VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/cassandra/visibility/versioned
  if [[ ${SKIP_DB_CREATE} != true ]]; then
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" create -k "${VISIBILITY_KEYSPACE}" --rf "${CASSANDRA_REPLICATION_FACTOR}"
  fi
  temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" setup-schema -v 0.0
  temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
  #...
}
#...

# set your MySQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_secondary_visibility}"
: "${DB_PORT:=}"
: "${MYSQL_SEEDS:=}"
: "${MYSQL_USER:=}"
: "${MYSQL_PWD:=}"
: "${MYSQL_TX_ISOLATION_COMPAT:=false}"
#...

# set up MySQL schema
setup_mysql_schema() {
  #...
  # use valid schema for the version of the database you want to set up for Visibility
  VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/mysql/${MYSQL_VERSION_DIR}/visibility/versioned
  if [[ ${SKIP_DB_CREATE} != true ]]; then
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" create
  fi
  temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" setup-schema -v 0.0
  temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
  #...
}
```

For the Elasticsearch-as-both-primary-and-secondary Visibility store configuration in the previous example, an example setup script would be as follows.

```bash
#...
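# (Elasticsearch-specific excerpt: ES_VIS_INDEX names the primary Visibility
#  index, and a non-empty ES_SEC_VIS_INDEX triggers creation of the secondary
#  index at the end of setup_es_index below.)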
# Elasticsearch
: "${ENABLE_ES:=false}"
: "${ES_SCHEME:=http}"
: "${ES_SEEDS:=}"
: "${ES_PORT:=9200}"
: "${ES_USER:=}"
: "${ES_PWD:=}"
: "${ES_VERSION:=v7}"
: "${ES_VIS_INDEX:=temporal_visibility_v1_dev}"
: "${ES_SEC_VIS_INDEX:=temporal_visibility_v1_new}"
: "${ES_SCHEMA_SETUP_TIMEOUT_IN_SECONDS:=0}"
#...

# Set up Elasticsearch index
setup_es_index() {
  ES_SERVER="${ES_SCHEME}://${ES_SEEDS%%,*}:${ES_PORT}"
  # ES_SERVER is the URL of the Elasticsearch server; for example, "http://localhost:9200".
  SETTINGS_URL="${ES_SERVER}/_cluster/settings"
  SETTINGS_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/cluster_settings_${ES_VERSION}.json
  TEMPLATE_URL="${ES_SERVER}/_template/temporal_visibility_v1_template"
  SCHEMA_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/index_template_${ES_VERSION}.json
  INDEX_URL="${ES_SERVER}/${ES_VIS_INDEX}"
  curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${SETTINGS_URL}" -H "Content-Type: application/json" --data-binary "@${SETTINGS_FILE}" --write-out "\n"
  curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${TEMPLATE_URL}" -H 'Content-Type: application/json' --data-binary "@${SCHEMA_FILE}" --write-out "\n"
  curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${INDEX_URL}" --write-out "\n"
  # Checks for and sets up Elasticsearch as a secondary Visibility store
  if [[ ! -z "${ES_SEC_VIS_INDEX}" ]]; then
    SEC_INDEX_URL="${ES_SERVER}/${ES_SEC_VIS_INDEX}"
    curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${SEC_INDEX_URL}" --write-out "\n"
  fi
}
```

#### Update Temporal Service configuration

With the primary and secondary stores set, update the `system.secondaryVisibilityWritingMode` and `system.enableReadFromSecondaryVisibility` configuration keys in your self-hosted Temporal Service's dynamic configuration YAML file to enable read and/or write operations on the secondary Visibility store.

For example, to enable write operations to both primary and secondary stores, but disable reading from the secondary store, use the following.

```yaml
system.secondaryVisibilityWritingMode:
  - value: 'dual'
    constraints: {}
system.enableReadFromSecondaryVisibility:
  - value: false
    constraints: {}
```

For details on the configuration options, see:

- [Secondary Visibility dynamic configuration reference](/references/dynamic-configuration#secondary-visibility-settings)
- [Migrating Visibility databases](#migrating-visibility-database)

## How to migrate Visibility database {#migrating-visibility-database}

To migrate your Visibility database, [set up a secondary Visibility store](#dual-visibility) to enable [Dual Visibility](/dual-visibility), and update the dynamic configuration in your Temporal Service to update the read and write operations for the Visibility store. Dual Visibility setup is optional but useful for gradually migrating your Visibility data to another database.

Before you begin, verify [supported databases and versions](/self-hosted-guide/visibility) for a Visibility store.

The following steps describe how to migrate your Visibility database. After you make any changes to your [Temporal Service configuration](/temporal-service/configuration), ensure that you restart your services.

#### Set up secondary Visibility store

1. In your Temporal Service configuration, [add a secondary Visibility store](/references/configuration#secondaryvisibilitystore) to your Visibility setup under the Persistence configuration.

Example: To migrate from Cassandra to Elasticsearch, add Elasticsearch as your secondary database and set it up.
For details, see [secondary Visibility database schema and setup](#dual-visibility).

```yaml
persistence:
  visibilityStore: cass-visibility
  secondaryVisibilityStore: es-visibility
  datastores:
    cass-visibility:
      cassandra:
        hosts: '127.0.0.1'
        keyspace: 'temporal_visibility'
    es-visibility:
      elasticsearch:
        version: 'v7'
        logLevel: 'error'
        url:
          scheme: 'http'
          host: '127.0.0.1:9200'
        indices:
          visibility: temporal_visibility_v1_dev
        closeIdleConnectionsInterval: 15s
```

1. Update the [dynamic configuration](/temporal-service/configuration#dynamic-configuration) keys on your self-hosted Temporal Service to enable write operations to the secondary store and disable read operations.

Example:

```yaml
system.secondaryVisibilityWritingMode:
  - value: "dual"
    constraints: {}
system.enableReadFromSecondaryVisibility:
  - value: false
    constraints: {}
```

At this point, Visibility data is read from the primary store, and all Visibility data is written to both the primary and secondary stores. This setting applies only to new Visibility data generated after Dual Visibility is enabled. It does not migrate any existing data in the primary store to the secondary store. For details on write options to the secondary store, see [Secondary Visibility dynamic configuration reference](/references/dynamic-configuration#secondary-visibility-settings).

#### Run in dual mode

When you enable a secondary store, only new Visibility data is written to both primary and secondary stores. The primary store still holds the Workflow Execution data from before the secondary store was set up. Running in dual mode gives you time to handle the closed and open Workflow Execution data that predates the secondary store in your self-hosted Temporal Service.

Example:

- To manage closed Workflow Executions data, run in dual mode until the Namespace [Retention Period](/temporal-service/temporal-server#retention-period) is reached. After the Retention Period, Workflow Execution data is removed from the Persistence and Visibility stores. If you want to keep the closed Workflow Executions data after the set Retention Period, you must set up [Archival](/self-hosted-guide/archival).
- To manage data for all open Workflow Executions, run in dual mode until all the Workflow Executions started before enabling Dual Visibility mode are closed. After the Workflow Executions are closed, verify the Retention Period and set up Archival if you need to keep the data beyond the Retention Period.

You can run your Visibility setup in dual mode for an indefinite period, or until you are ready to deprecate the primary store and move completely to the secondary store without losing data.

#### Deprecate primary Visibility store

When you are ready to deprecate your primary store, follow these steps.

1. Update the dynamic configuration YAML to enable read operations from the secondary store.

Example:

```yaml
system.secondaryVisibilityWritingMode:
  - value: "dual"
    constraints: {}
system.enableReadFromSecondaryVisibility:
  - value: true
    constraints: {}
```

At this point, Visibility data is read from the secondary store only. Verify that the data on the secondary store is correct.

1. When the secondary store is vetted and ready to replace your current primary store, change your Temporal Service configuration to set the secondary store as your primary, and remove the dynamic configuration set in the previous steps.
Example:

```yaml
persistence:
  visibilityStore: es-visibility
  datastores:
    es-visibility:
      elasticsearch:
        version: 'v7'
        logLevel: 'error'
        url:
          scheme: 'http'
          host: '127.0.0.1:9200'
        indices:
          visibility: temporal_visibility_v1_dev
        closeIdleConnectionsInterval: 15s
```

## Managing custom Search Attributes {#custom-search-attributes}

To manage your custom Search Attributes on Temporal Cloud, use `tcld`. With Temporal Cloud, you can create and rename custom Search Attributes.

To manage your custom Search Attributes on self-hosted Temporal Clusters, use the Temporal CLI. With a self-hosted Temporal Service, you can create and remove custom Search Attributes.

Note that if you use [SQL databases](/self-hosted-guide/visibility) with Temporal Server v1.20 and later, creating a custom Search Attribute creates a mapping to a database field name in the Visibility store `custom_search_attributes` table. Removing a custom Search Attribute removes this mapping to the database field name but does not remove the data. If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed. This might cause unexpected results when you use the List API to retrieve results using the new custom Search Attribute. These constraints do not apply if you use Elasticsearch.

### How to create custom Search Attributes {#create-custom-search-attributes}

Add custom Search Attributes to your Visibility store using the Temporal CLI for a self-hosted Temporal Service and `tcld` for Temporal Cloud.

Creating a custom Search Attribute in your Visibility store makes it available to use in your Workflow metadata and [List Filters](/list-filter).

**On Temporal Cloud**

To create custom Search Attributes on Temporal Cloud, use [`tcld namespace search-attributes add`](/cloud/tcld/namespace/#search-attributes). For example, to add a custom Search Attribute "CustomSA" to your Temporal Cloud Namespace "YourNamespace", run the following command.

`tcld namespace search-attributes add --namespace YourNamespace --search-attribute "CustomSA"`

**On self-hosted Temporal Service**

If you're self-hosting your Temporal Service, verify whether your [Visibility database](/self-hosted-guide/visibility) version supports advanced Visibility features.

To create custom Search Attributes in your self-hosted Temporal Service Visibility store, use `temporal operator search-attribute create` with the `--name` and `--type` command options. For example, to create a Search Attribute called `CustomSA` of type `Keyword`, run the following command:

```
temporal operator search-attribute create --name="CustomSA" --type="Keyword"
```

Note that if you use a SQL database with advanced Visibility capabilities, you are required to specify a Namespace when creating a custom Search Attribute. For example:

```
temporal operator search-attribute create --name="CustomSA" --type="Keyword" --namespace="yournamespace"
```

You can also create multiple custom Search Attributes when you set up your Visibility store. For example, the [auto-setup.sh](https://github.com/temporalio/docker-builds/blob/main/docker/auto-setup.sh) script that is used to set up your local [docker-compose Temporal Service](https://github.com/temporalio/docker-compose) creates custom Search Attributes in the Visibility store, as shown in the following code snippet from the script (for SQL databases).
```bash
add_custom_search_attributes() {
  until temporal operator search-attribute list --namespace "${DEFAULT_NAMESPACE}"; do
    echo "Waiting for namespace cache to refresh..."
    sleep 1
  done
  echo "Namespace cache refreshed."

  echo "Adding Custom*Field search attributes."
  temporal operator search-attribute create --namespace "${DEFAULT_NAMESPACE}" --yes \
    --name="CustomKeywordField" --type="Keyword" \
    --name="CustomStringField" --type="Text" \
    --name="CustomTextField" --type="Text" \
    --name="CustomIntField" --type="Int" \
    --name="CustomDatetimeField" --type="Datetime" \
    --name="CustomDoubleField" --type="Double" \
    --name="CustomBoolField" --type="Bool"
}
```

Note that this script has been updated for Temporal Server v1.20, which requires associating every custom Search Attribute with a Namespace when using a SQL database. For Temporal Server v1.19 and earlier, or if using Elasticsearch for advanced Visibility, you can create custom Search Attributes without a Namespace association, as shown in the following example.

```bash
add_custom_search_attributes() {
  echo "Adding Custom*Field search attributes."
  temporal operator search-attribute create \
    --name="CustomKeywordField" --type="Keyword" \
    --name="CustomStringField" --type="Text" \
    --name="CustomTextField" --type="Text" \
    --name="CustomIntField" --type="Int" \
    --name="CustomDatetimeField" --type="Datetime" \
    --name="CustomDoubleField" --type="Double" \
    --name="CustomBoolField" --type="Bool"
}
```

When your Visibility store is set up and running, these custom Search Attributes are available to use in your Workflow code.

### How to remove custom Search Attributes {#remove-custom-search-attributes}

To remove a Search Attribute key from your self-hosted Temporal Service Visibility store, use the command `temporal operator search-attribute remove`. Removing Search Attributes is not supported on Temporal Cloud.

For example, if using Elasticsearch for advanced Visibility, to remove a custom Search Attribute called `your_custom_attribute`, use the following command:

```
temporal operator search-attribute remove \
  --name="your_custom_attribute"
```

With Temporal Server v1.20, if using a SQL database for advanced Visibility, you need to specify the Namespace in your command, as shown in the following command:

```
temporal operator search-attribute remove \
  --name="your_custom_attribute" \
  --namespace="your_namespace"
```

To check whether the Search Attribute was removed, run

```
temporal operator search-attribute list
```

and check the list. If you're on Temporal Server v1.20 and later, specify the Namespace from which you removed the Search Attribute. For example:

```
temporal operator search-attribute list --namespace="yournamespace"
```

Note that if you use [SQL databases](/self-hosted-guide/visibility) with Temporal Server v1.20 and later, a new custom Search Attribute is mapped to a database field name in the Visibility store `custom_search_attributes` table. Removing this custom Search Attribute removes the mapping to the database field name but does not remove the data. If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed. This might cause unexpected results when you use the List API to retrieve results using the new custom Search Attribute. These constraints do not apply if you use Elasticsearch.
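Once a custom Search Attribute exists, you can verify it is queryable with a List Filter. A minimal sketch using the Temporal CLI, assuming the `CustomKeywordField` attribute created by the script above, the `default` Namespace, and that some Workflow Execution has set a value for it:

```bash
# List Workflow Executions whose CustomKeywordField matches a value.
# The --query string is a List Filter; Search Attribute names are case-sensitive.
temporal workflow list \
  --namespace default \
  --query 'CustomKeywordField = "my-value"'
```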
---

## Quick Launch - Deploying your Workers on Amazon EKS

Temporal Workers running in [Kubernetes](https://kubernetes.io)-based deployments deliver scale, resilience, and flexible resource management. Amazon EKS (Elastic Kubernetes Service) is one of the most popular choices for running Temporal Workers. It integrates smoothly with AWS services and supports auto-scaling and fault tolerance, key features for many Temporal users. Follow this guide to deploy and manage your Temporal Workers in EKS.

This guide walks you through writing Temporal Worker code, containerizing and publishing the Worker to the Amazon Elastic Container Registry (ECR), and deploying the Worker to Amazon EKS. The example on this page uses Temporal's Python SDK and Temporal Cloud.

:::tip

This guide applies to running Workers for both Temporal OSS and Temporal Cloud. However, there are some differences when working with Temporal OSS. For example, you'll need to use mTLS certificates instead of API keys. You must modify your Kubernetes deployments to handle and mount the TLS certificates for your use case. The specifics will vary depending on your deployment.

:::

## Before you begin

To get started deploying your Workers to EKS, you'll need:

- Your Temporal Cloud account, including:
  - A Namespace using [API key authentication](/cloud/api-keys#namespace-authentication)
  - Your API Key for a [Service Account](/cloud/api-keys#generate-an-api-key-for-a-service-account)
- An Amazon Web Services (AWS) account, including:
  - A deployed EKS cluster within your AWS Account
  - An installed version of the [`aws` CLI](https://aws.amazon.com/cli/)
- [`docker`](https://www.docker.com/get-started/)
- The [`kubectl`](https://kubernetes.io/docs/reference/kubectl/) command line tool, configured with your deployed EKS cluster

## Write your Worker code

In Temporal applications, business logic lives within your main Workflow code. Your Worker code runs separately, and is responsible for executing your Workflows and Activities.
Make sure to configure your Worker to use environment variables so you can dynamically route your Worker to different Temporal instances, Namespaces, and Task Queues on the fly:

```python
TEMPORAL_ADDRESS = os.environ.get("TEMPORAL_ADDRESS", "localhost:7233")
TEMPORAL_NAMESPACE = os.environ.get("TEMPORAL_NAMESPACE", "default")
TEMPORAL_TASK_QUEUE = os.environ.get("TEMPORAL_TASK_QUEUE", "test-task-queue")
TEMPORAL_API_KEY = os.environ.get("TEMPORAL_API_KEY", "")
```

After configuration, instantiate your Temporal client:

```python
client = await Client.connect(
    TEMPORAL_ADDRESS,
    namespace=TEMPORAL_NAMESPACE,
    rpc_metadata={"temporal-namespace": TEMPORAL_NAMESPACE},
    api_key=TEMPORAL_API_KEY,
    tls=True
)
```

Here is a complete Python boilerplate that shows how to instantiate a Client and pass it to the Worker before starting the Worker execution:

```python
import asyncio
import os

from temporalio.worker import Worker
from temporalio.client import Client

from workflows import your_workflow
from activities import your_first_activity, your_second_activity, your_third_activity

TEMPORAL_ADDRESS = os.environ.get("TEMPORAL_ADDRESS", "localhost:7233")
TEMPORAL_NAMESPACE = os.environ.get("TEMPORAL_NAMESPACE", "default")
TEMPORAL_TASK_QUEUE = os.environ.get("TEMPORAL_TASK_QUEUE", "test-task-queue")
TEMPORAL_API_KEY = os.environ.get("TEMPORAL_API_KEY", "your-api-key")

async def main():
    client = await Client.connect(
        TEMPORAL_ADDRESS,
        namespace=TEMPORAL_NAMESPACE,
        rpc_metadata={"temporal-namespace": TEMPORAL_NAMESPACE},
        api_key=TEMPORAL_API_KEY,
        tls=True
    )

    print("Initializing worker...")

    # Run the worker
    worker = Worker(
        client,
        task_queue=TEMPORAL_TASK_QUEUE,
        workflows=[your_workflow],
        activities=[
            your_first_activity,
            your_second_activity,
            your_third_activity
        ]
    )

    print("Starting worker... Waiting for tasks.")
    await worker.run()

if __name__ == "__main__":
    asyncio.run(main())
```

## Containerize the Worker for Kubernetes

You need to containerize your Worker code to run it with Kubernetes. Here is a sample Python Dockerfile, complete with the Temporal Python SDK installed:

```docker
# Use Python 3.11 slim image as base
FROM python:3.11-slim

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Install the Temporal Python SDK dependency
RUN pip install --no-cache-dir temporalio

# Copy the Worker, Workflow, and Activity code into the image
WORKDIR /app
COPY . .

# Set Python to run in unbuffered mode
ENV PYTHONUNBUFFERED=1

# Run the worker
CMD ["python", "worker.py"]
```

Build the Docker image and target the `linux/amd64` architecture:

```bash
docker buildx build \
  --platform linux/amd64 \
  -t your-app .
```

## Publish the Worker Image to Amazon ECR

After building the Docker image, you're ready to publish it to Amazon ECR. Make sure that you're authenticated with AWS, and that you've set your `AWS_REGION` and `AWS_ACCOUNT_ID` environment variables:

```bash
export AWS_ACCOUNT_ID=
export AWS_REGION=
```

Create an ECR repository and authenticate ECR with the Docker container client:

```bash
aws ecr create-repository \
  --repository-name your-app

aws ecr get-login-password --region $AWS_REGION | \
  docker login --username AWS --password-stdin \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
```

After authenticating Docker with ECR, tag your container and publish it:

```bash
docker tag your-app $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/your-app:latest
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/your-app:latest
```

## Deploy the Workers to EKS

With your Worker containerized, you're ready to deploy it to EKS.
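Before creating Kubernetes resources, it can help to confirm that `kubectl` is pointed at the intended EKS cluster. A quick, optional check (the cluster name `your-cluster` is a placeholder for your own):

```bash
# Point kubectl at your EKS cluster and confirm the active context.
aws eks update-kubeconfig --name your-cluster --region $AWS_REGION
kubectl config current-context
```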
Create a namespace in your EKS cluster. You'll use the namespace to run your Temporal Workers:

```bash
kubectl create namespace your-namespace
```

Create a `ConfigMap` to hold non-sensitive values that Kubernetes will inject into your Worker deployment. These enable dynamic routing for instances, Namespaces, and Task Queues. To set these values, build a config-map.yaml file like the following example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: temporal-worker-config
  namespace: your-namespace
data:
  TEMPORAL_ADDRESS: ""
  TEMPORAL_NAMESPACE: ""
  TEMPORAL_TASK_QUEUE: ""
```

Apply the `ConfigMap` to your namespace:

```bash
kubectl apply -f config-map.yaml \
  --namespace your-namespace
```

For sensitive values, use Kubernetes Secrets. Create a secret to hold your Temporal API key:

```bash
kubectl create secret generic temporal-secret \
  --from-literal=TEMPORAL_API_KEY=$TEMPORAL_API_KEY \
  --namespace your-namespace
```

With your configuration in place, you can deploy the Worker. Create a deployment.yaml file configuring your Worker image, resources, and secret values. For common deployments, tune the resources you specify so they match your production workloads. Note that the spun-up container reads your Temporal API key from the Kubernetes secret you just created:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
  namespace: your-namespace
  labels:
    app: your-app
spec:
  selector:
    matchLabels:
      app: your-app
  replicas: 1
  template:
    metadata:
      labels:
        app: your-app
    spec:
      serviceAccountName: your-app
      containers:
        - name: your-app
          image:
          env:
            - name: TEMPORAL_ADDRESS
              valueFrom:
                configMapKeyRef:
                  name: temporal-worker-config
                  key: TEMPORAL_ADDRESS
            - name: TEMPORAL_NAMESPACE
              valueFrom:
                configMapKeyRef:
                  name: temporal-worker-config
                  key: TEMPORAL_NAMESPACE
            - name: TEMPORAL_TASK_QUEUE
              valueFrom:
                configMapKeyRef:
                  name: temporal-worker-config
                  key: TEMPORAL_TASK_QUEUE
            - name: TEMPORAL_API_KEY
              valueFrom:
                secretKeyRef:
                  name: temporal-secret
                  key: TEMPORAL_API_KEY
          resources:
            limits:
              cpu: "0.5"
              memory: "512Mi"
            requests:
              cpu: "0.2"
              memory: "256Mi"
```

Apply the deployment.yaml file to the EKS cluster:

```bash
kubectl apply -f deployment.yaml \
  --namespace your-namespace
```

## Verify that the Workers are Connected

After deploying your Workers to EKS, confirm that they have connected to Temporal Cloud. Retrieve the pod listing for the Kubernetes/EKS namespace that you created:

```
kubectl get pods -n your-namespace
```

After listing the pods, access the Worker logs to confirm you're properly connected to Temporal Cloud:

```
kubectl logs <pod-name> -n your-namespace
```

You confirm the connection when you see:

```
Initializing worker...
Starting worker... Waiting for tasks.
```

You have now successfully deployed your Temporal Worker to EKS.

---

## Temporal Worker Deployments

A core feature of Temporal is that you are able to deploy your Workers to any infrastructure where your Workflow and Activity code will actually run. This way, you have total control over your runtime environment, and can be responsive to any security or scaling needs that may arise over time, whether you are using Temporal Cloud or self-hosting a Temporal Service.

If you are just getting started, want more guidance, or need a refresher on Temporal concepts, our [Tutorials and Courses](https://learn.temporal.io/) help by using only one or two Temporal Workers to demonstrate core functionality.
Once you have an understanding of the core concepts, the content in this section will provide clarity on real-world deployments that grow far beyond those examples. Our Worker Deployments guide documents Temporal product features that make it easier to scale and revise your Workflows. [Worker Versioning](/production-deployment/worker-deployments/worker-versioning) allows you to pin Workflows to individual versions of your Workers, which are called Worker Deployment Versions. You can optionally use the Temporal [Worker Controller](/production-deployment/worker-deployments/kubernetes-controller) to programmatically manage and scale your Worker deployments in Kubernetes pods.

This section also covers specific Worker Deployment examples:

- [**Deploy Workers to Amazon EKS**](/production-deployment/worker-deployments/deploy-workers-to-aws-eks)

  Containerize your Worker, publish it to Amazon Elastic Container Registry (ECR), and deploy it to Amazon Elastic Kubernetes Service (EKS) using the Temporal Python SDK. This guide covers the full deployment lifecycle and shows how to configure your Worker to connect to Temporal Cloud using Kubernetes-native tools like ConfigMaps and Secrets. Running Workers on EKS gives you fine-grained control over scaling, resource allocation, and availability, which is ideal for production systems that need reliability and flexibility in the cloud.

---

## Temporal Worker Controller

The [Temporal Worker Controller](https://github.com/temporalio/temporal-worker-controller) provides automation to enable rainbow deployments of your Workers by simplifying the tracking of which versions still have active Workflows, managing the lifecycle of versioned Worker deployments, and calling Temporal APIs to update the routing config of Temporal Worker Deployments. The Temporal Worker Controller makes it simple and safe to deploy Temporal Workers on Kubernetes.

### Why adopt the Worker Controller?

The traditional approach to revising Temporal Workflows is to add branches using the [Versioning APIs](/workflow-definition#workflow-versioning). Over time these checks can become a source of technical debt, as safely removing them from a codebase is a careful process that often involves querying all running Workflows.

[Worker Versioning](/production-deployment/worker-deployments/worker-versioning) is a Temporal feature that allows you to pin Workflows to individual versions of your Workers, which are called Worker Deployment Versions. Using pinning, you will not need to add branching to your Workflows to avoid non-determinism errors. This allows you to bypass the other Versioning APIs.

The Worker Controller gives you direct, programmatic control over your Worker deployments, and integrates with the [Temporal CLI](/production-deployment/worker-deployments/worker-versioning#rolling-out-changes-with-the-cli). You do not need to use the Worker Controller to use Worker Versioning, but when used together, Worker Versioning and the Worker Controller can provide more graceful deployments and upgrades, and less need to manually tune your Workers.

Note that in Temporal, **Worker Deployment** is sometimes referred to as **Deployment**, but because the Worker Controller references the Kubernetes Deployment resource extensively, this page sticks to the following terms:

- [**Worker Deployment**](/worker-versioning#deployments): A Worker Deployment is a logical service that groups similar Workers together for unified management.
  Each Deployment has a name (such as your service name) and supports versioning through a series of Worker Deployment Versions.
- [**Worker Deployment Version**](/worker-versioning#deployment-versions): A Worker Deployment Version represents an iteration of a Worker Deployment. Each Deployment Version consists of Workers that share the same code build and environment. When a Worker starts polling for Workflow and Activity Tasks, it reports its Deployment Version to the Temporal Server.
- **Deployment**: A Kubernetes Deployment resource. A Deployment is "versioned" if it is running versioned Temporal workers/pollers.

### Features

- Registration of new Temporal Worker Deployment Versions
- Creation of versioned Deployment resources (that manage the Pods that run your Temporal pollers)
- Deletion of resources associated with drained Worker Deployment Versions
- `Manual`, `AllAtOnce`, and `Progressive` rollouts of new versions
- Ability to specify a "gate" Workflow that must succeed on the new version before routing real traffic to that version
- [Autoscaling](/develop/worker-performance#recommended-approach) of versioned Deployments

Refer to the [Temporal Worker Controller repo](https://github.com/temporalio/temporal-worker-controller/) for usage details.

## Configuring Worker Lifecycles

To use the Temporal Worker Controller, tag your Workers following the guidance for using [Worker Versioning](/production-deployment/worker-deployments/worker-versioning). Here is an example of a progressive rollout strategy gated on the success of the `HelloWorld` Workflow:

```
rollout:
  strategy: Progressive
  steps:
    - rampPercentage: 1
      pauseDuration: 30s
    - rampPercentage: 10
      pauseDuration: 1m
  gate:
    workflowType: "HelloWorld"
```

As you ship new deployment versions, the Worker Controller automatically detects them and gradually makes each one the new **Current Version** of the Worker Deployment it is a part of. As older pinned Workflows finish executing and deprecated deployment versions become drained, the Worker Controller also frees up resources by sunsetting the `Deployment` resources polling those versions.

## Running the Temporal Worker Controller

You can install the Temporal Worker Controller using our Helm chart:

```bash
RELEASE=temporal-worker-controller
NAMESPACE=temporal-system
VERSION=1.0.0

helm install $RELEASE oci://docker.io/temporalio/helm-charts/temporal-worker-controller \
    --version $VERSION \
    --namespace $NAMESPACE \
    --create-namespace
```

Alternatively, install from a local checkout of the chart:

```bash
helm install temporal-worker-controller ./helm/temporal-worker-controller \
    --namespace $NAMESPACE \
    --create-namespace
```

Refer to [GitHub](https://github.com/temporalio/temporal-worker-controller/tree/main/helm/temporal-worker-controller/templates) for other Worker Controller deployment templates.

---

## Worker Versioning

Worker Versioning is a Temporal feature that allows you to confidently deploy new changes to the Workflows running on your Workers without breaking them. Temporal enables this by helping you manage different builds or versions, formally called [Worker Deployment Versions](/worker-versioning#deployment-versions). Worker Versioning unlocks important benefits for users of [blue-green or rainbow deployments](#deployment-systems):

- Ramping traffic gradually to a new Worker Deployment Version.
- Verifying a new Deployment Version with tests before sending production traffic to it.
- Instant rollback when you detect that a new Deployment Version is broken.
- Lower error rates when adopting new versions.
In addition, Worker Versioning introduces **Workflow Pinning**. For pinned Workflow Types, each execution runs entirely on the Worker Deployment Version where it started. You need not worry about making breaking code changes to running, pinned Workflows. To use Workflow Pinning, we recommend using [rainbow deployments](#deployment-systems).

:::tip
Watch this Temporal Replay 2025 talk to learn more about Worker Versioning and see a demo.
:::

:::note
Worker Versioning is currently available in Public Preview. Minimum versions:

- Go SDK [v1.35.0](https://github.com/temporalio/sdk-go/releases/tag/v1.35.0)
- Python [v1.11](https://github.com/temporalio/sdk-python/releases/tag/1.11.0)
- Java [v1.29](https://github.com/temporalio/sdk-java/releases/tag/v1.29.0)
- TypeScript [v1.12](https://github.com/temporalio/sdk-typescript/releases/tag/v1.12.0)
- .NET [v1.7.0](https://github.com/temporalio/sdk-dotnet/releases/tag/1.7.0)
- Ruby [v0.5.0](https://github.com/temporalio/sdk-ruby/releases/tag/v0.5.0)
- Other SDKs: coming soon!

Self-hosted users:

- Minimum Temporal CLI version [v1.4.1](https://github.com/temporalio/cli/releases/tag/v1.4.1)
- Minimum Temporal Server version [v1.29.1](https://github.com/temporalio/temporal/releases/tag/v1.29.1)
- Minimum Temporal UI version [v2.38.0](https://github.com/temporalio/ui/releases/tag/v2.38.0)
:::

## Getting Started with Worker Versioning {#definition}

To get started with Worker Versioning, you should understand some concepts around versioning and deployments.

- A **Worker Deployment** is a deployment or service across multiple versions. In a rainbow deployment, more than two active Deployment Versions can run at once.
- A **Worker Deployment Version** is a version of a deployment or service. It can have multiple Workers polling on multiple Task Queues, but they all run the same build.
- A **Build ID**, in combination with a Worker Deployment name, identifies a single Worker Deployment Version.
- When a versioned Worker polls on a Task Queue, that Task Queue becomes part of that Worker's version. That version's Worker Deployment controls how the Task Queue matches Workflow Tasks with Workers.
- Using **Workflow Pinning**, you can declare each Workflow Type to have a **Versioning Behavior**, either Pinned or Auto-Upgrade.
- A **Pinned** Workflow is guaranteed to complete on a single Worker Deployment Version.
- An **Auto-Upgrade** Workflow automatically moves to a new code version as you roll it out, specifically its Target Worker Deployment Version (defined below). Therefore, Auto-Upgrade Workflows are not restricted to a single Deployment Version and need to be kept replay-safe manually, i.e. with [patching](/workflow-definition#workflow-versioning).
- Both Pinned and Auto-Upgrade Workflows are guaranteed to start only on the Current or Ramping Version of their Worker Deployment.
- Pinned Workflows are designed for use with rainbow deployments. See [Deployment Systems](#deployment-systems).
- Pinned Workflows don't need to be patched, as they run on the same Worker and build until they complete.
- If you expect your Workflow to run longer than you want your Worker Deployment Versions to exist, you should mark your Workflow Type as Auto-Upgrade.
- Each Worker Deployment has a single [**Current Version**](/worker-versioning#versioning-definitions), which is where Workflows are routed unless they were previously pinned on a different version.
- Each Worker Deployment can have a [**Ramping Version**](/worker-versioning#versioning-definitions), which is where a configurable percentage of Workflows are routed unless they were previously pinned on a different version.
- For a given Workflow, its [**Target Worker Deployment Version**](/worker-versioning#versioning-definitions) is the version it will move to next.

## Setting up your deployment system {#deployment-systems}

If you haven't already, you'll want to pick a container deployment solution for your Workers. You also need to pick among three common deployment strategies:

- A **rolling deployment** strategy upgrades Workers in place, with little control over how quickly they cut over and only a slow ability to roll Workers back. Rolling deploys have a minimal footprint but tend to provide lower availability than the other strategies, and they are incompatible with Worker Versioning.
- A **blue-green deployment** strategy maintains two "colors," or Worker Deployment Versions, simultaneously and can control how traffic is routed between them. This allows you to maximize your uptime with features like instant rollback and ramping. Worker Versioning enables the routing control that blue-green deployments need.
- A **rainbow deployment** strategy is like blue-green but with more colors, allowing Workflow Pinning. You can deploy new revisions of your Workflows freely while older versions drain. Using Worker Versioning, Temporal lets you know when all the Workflows of a given version are drained so that you can sunset it.

:::note
You also have the option to use the [Temporal Worker Controller](/production-deployment/worker-deployments/kubernetes-controller) to automatically enable rainbow deployments of your Workers if you're using Kubernetes.
:::

## Configuring a Worker for Versioning

You'll need to add a few additional configuration parameters to your Workers to toggle on Worker Versioning. There are three new parameters, with different names depending on the language:

- `UseVersioning`: Enables the Versioning functionality for this Worker.
- A `Version` that identifies the revision this Worker will be allowed to execute. This is a combination of a Deployment name and a Build ID.
- (Optional) The [Default Versioning Behavior](#definition). If unset, you'll be required to specify the behavior on each Workflow. Alternatively, you can default to Pinned or Auto-Upgrade.
Follow the example for your SDK below:

```go
buildID := mustGetEnv("MY_BUILD_ID")
w := worker.New(c, myTaskQueue, worker.Options{
	DeploymentOptions: worker.DeploymentOptions{
		UseVersioning: true,
		Version: worker.WorkerDeploymentVersion{
			DeploymentName: "llm_srv",
			BuildId:        buildID,
		},
		DefaultVersioningBehavior: workflow.VersioningBehaviorUnspecified,
	},
})
```

```java
WorkerOptions.newBuilder()
    .setDeploymentOptions(
        WorkerDeploymentOptions.newBuilder()
            .setVersion(new WorkerDeploymentVersion("llm_srv", "1.0"))
            .setUseVersioning(true)
            .setDefaultVersioningBehavior(VersioningBehavior.AUTO_UPGRADE)
            .build())
    .build();
```

```python
from temporalio.common import WorkerDeploymentVersion, VersioningBehavior
from temporalio.worker import Worker, WorkerDeploymentConfig

Worker(
    client,
    task_queue="mytaskqueue",
    workflows=workflows,
    activities=activities,
    deployment_config=WorkerDeploymentConfig(
        version=WorkerDeploymentVersion(
            deployment_name="llm_srv", build_id=my_env.build_id),
        use_worker_versioning=True,
        default_versioning_behavior=VersioningBehavior.UNSPECIFIED,
    ),
)
```

```ts
const myWorker = await Worker.create({
  workflowsPath: require.resolve('./workflows'),
  taskQueue,
  workerDeploymentOptions: {
    useWorkerVersioning: true,
    version: { buildId: '1.0', deploymentName: 'llm_srv' },
    defaultVersioningBehavior: 'UNSPECIFIED',
  },
  connection: nativeConnection,
});
```

```csharp
var myWorker = new TemporalWorker(
    Client,
    new TemporalWorkerOptions(taskQueue)
    {
        DeploymentOptions = new(new("llm_srv", "1.0"), true)
        {
            DefaultVersioningBehavior = VersioningBehavior.Unspecified,
        },
    }.AddWorkflow<MyWorkflow>());
```

```ruby
worker = Temporalio::Worker.new(
  client: client,
  task_queue: task_queue,
  workflows: [MyWorkflow],
  deployment_options: Temporalio::Worker::DeploymentOptions.new(
    version: Temporalio::WorkerDeploymentVersion.new(
      deployment_name: 'llm_srv',
      build_id: '1.0'
    ),
    use_worker_versioning: true,
    default_versioning_behavior: Temporalio::VersioningBehavior::UNSPECIFIED
  )
)
```

### Which Default Versioning Behavior should you choose?

If you are using blue-green deployments, you should default to Auto-Upgrade and should not use Workflow Pinning. Otherwise, if your Worker and Workflows are new, we suggest not providing a `DefaultVersioningBehavior`. In general, each Workflow Type should be annotated as Auto-Upgrade or Pinned. If all of your Workflows will be short-running for the foreseeable future, you can default to Pinned.

Many users who are migrating to Worker Versioning will start by defaulting to Auto-Upgrade until they have had time to annotate their Workflows. This default is the most similar to the legacy behavior. Once each Workflow Type is annotated, you can remove the `DefaultVersioningBehavior`.

Be aware of a queue-blocking limitation that can affect new or Auto-Upgrade Workflows during a ramp: if either the Current or the Ramping Version is down or lacks capacity, its Tasks can block the queue, so other versions stop receiving Tasks or slow down. For example, suppose you have a Current Version and a Ramping Version at 50%. If all of your Current Version Workers go down, you would expect at least 50% of new Workflows to go to the Ramping Version. This won't happen, because the Tasks for the Current Version are blocking the queue.

:::note
Keep in mind that Child Workflows of a parent or previous Auto-Upgrade Workflow default to Auto-Upgrade behavior, not Unspecified.
:::

You also want to make sure you understand how your Activities will work across different Worker Deployment Versions.
Refer to the [Worker Versioning Activity behavior docs](/worker-versioning#actvity-behavior-across-versions) for more details.

## Rolling out changes with the CLI

Next, deploy your Worker with the additional configuration parameters. Before making any Workflow revisions, you can use the `temporal` CLI to check which of your Worker versions are currently polling. View the Versions that are part of a Deployment with `temporal worker deployment describe`:

```bash
temporal worker deployment describe --name="$MY_DEPLOYMENT"
```

To activate a Deployment Version, use `temporal worker deployment set-current-version`, specifying the deployment name and a Build ID:

```bash
temporal worker deployment set-current-version \
    --deployment-name "YourDeploymentName" \
    --build-id "YourBuildID"
```

To ramp a Deployment Version up to some percentage of your overall Worker fleet, use `set-ramping-version`, with the same parameters and a ramping percentage:

```bash
temporal worker deployment set-ramping-version \
    --deployment-name "YourDeploymentName" \
    --build-id "YourBuildID" \
    --percentage=5
```

You can verify that Workflows are cutting over to that version with `describe -w YourWorkflowID`:

```bash
temporal workflow describe -w YourWorkflowID
```

That returns the new Version that the Workflow is running on:

```
Versioning Info:
  Behavior          AutoUpgrade
  Version           llm_srv.2.0
  OverrideBehavior  Unspecified
```

## Marking a Workflow Type as Pinned

You can mark a Workflow Type as Pinned when you register it by adding an additional Pinned parameter. This causes it to remain on its original deployed version:

```go
// w is the Worker configured as in the previous example
w.RegisterWorkflowWithOptions(HelloWorld, workflow.RegisterOptions{
	// or workflow.VersioningBehaviorAutoUpgrade
	VersioningBehavior: workflow.VersioningBehaviorPinned,
})
```

```java
@WorkflowInterface
public interface HelloWorld {
    @WorkflowMethod
    String hello();
}

public static class HelloWorldImpl implements HelloWorld {
    @Override
    @WorkflowVersioningBehavior(VersioningBehavior.PINNED)
    public String hello() {
        return "Hello, World!";
    }
}
```

```python
@workflow.defn(versioning_behavior=VersioningBehavior.PINNED)
class HelloWorld:
    @workflow.run
    async def run(self):
        return "hello world!"
```

```ts
setWorkflowOptions({ versioningBehavior: 'PINNED' }, helloWorld);
export async function helloWorld(): Promise<string> {
  return 'hello world!';
}
```

```csharp
[Workflow(VersioningBehavior = VersioningBehavior.Pinned)]
public class HelloWorld
{
    [WorkflowRun]
    public async Task<string> RunAsync()
    {
        return "hello world!";
    }
}
```

```ruby
class HelloWorld < Temporalio::Workflow::Definition
  workflow_versioning_behavior Temporalio::VersioningBehavior::PINNED

  def execute
    'hello world!'
  end
end
```

## Moving a pinned Workflow

Sometimes you'll need to manually move a set of pinned Workflows off of a version that has a bug to a version with the fix.
If you need to move a pinned Workflow to a new version, use `temporal workflow update-options`:

```bash
temporal workflow update-options \
    --workflow-id "$WORKFLOW_ID" \
    --versioning-override-behavior pinned \
    --versioning-override-deployment-name "$TARGET_DEPLOYMENT" \
    --versioning-override-build-id "$TARGET_BUILD_ID"
```

You can move several Workflows at once matching a `--query` parameter:

```bash
temporal workflow update-options \
    --query="TemporalWorkerDeploymentVersion=$TARGET_DEPLOYMENT:$BAD_BUILD_ID" \
    --versioning-override-behavior pinned \
    --versioning-override-deployment-name "$TARGET_DEPLOYMENT" \
    --versioning-override-build-id "$FIXED_BUILD_ID"
```

In this scenario, you may also need to use the other [Versioning APIs](/workflow-definition#workflow-versioning) to patch your Workflow in the "fixed" build, so that your target Worker can handle the moved Workflows correctly.

If you made a [version-incompatible change](/workflow-definition#deterministic-constraints) to your Workflow and you want to roll back to an earlier version, it's not possible to patch it. Consider using [Workflow Reset](/workflow-execution/event#reset) along with your move. "Reset-with-Move" allows you to atomically Reset your Workflow and set a Versioning Override on the newly reset Workflow, so that when it resumes execution, all new Workflow Tasks will be executed on your new Worker.

```bash
temporal workflow reset with-workflow-update-options \
    --workflow-id "$WORKFLOW_ID" \
    --event-id "$EVENT_ID" \
    --reason "$REASON" \
    --versioning-override-behavior pinned \
    --versioning-override-deployment-name "$TARGET_DEPLOYMENT" \
    --versioning-override-build-id "$TARGET_BUILD_ID"
```

## Migrating a Workflow from Pinned to Auto-Upgrade

There may be times when you need to migrate your Workflow from Pinned to Auto-Upgrade, because you configured your Workflow Type with the wrong behavior or because you've pinned a very long-running Workflow by mistake. Pinned Workflows can block version drainage, especially when they run for a long time. You could move the Workflow to a new build, but that would just push the problem to the next build.

To make this change, you need to change the versioning behavior for your Workflow from Pinned to Auto-Upgrade. You can use `temporal workflow update-options` for this:

```bash
temporal workflow update-options \
    --workflow-id "$WORKFLOW_ID" \
    --versioning-override-behavior auto_upgrade
```

If you want to move all your Workflows of a certain type to this new configuration, you can do it with this command:

```bash
temporal workflow update-options \
    --query="WorkflowType='$WORKFLOW_TYPE'" \
    --versioning-override-behavior auto_upgrade
```

You can also filter on a certain Build ID to limit the number of Workflows you apply it to:

```bash
temporal workflow update-options \
    --query="WorkflowType='$WORKFLOW_TYPE' AND TemporalWorkerDeploymentVersion='$TARGET_DEPLOYMENT:$OLD_VERSION'" \
    --versioning-override-behavior auto_upgrade
```

:::note
When you change the behavior to Auto-Upgrade, the Workflow resumes work on the Workflow's Target Version. So if the Workflow's Target Version is different from the earlier Pinned Version, you should make sure you [patch](/patching#patching) the Workflow code.
:::

## Sunsetting an old Deployment Version

A Worker Deployment Version moves through the following states:

1. **Inactive**: The version exists because a Worker with that version has polled the server. If this version never becomes Active, it will never be Draining or Drained.
2. **Active**: The version is either Current or Ramping, so it is accepting new Workflows and existing Auto-Upgrade Workflows.
3. **Draining**: The version stopped being Current or Ramping, and it has open pinned Workflows running on it. It is possible to be Draining and have no open pinned Workflows for a short time, since the drainage status is updated periodically.
4. **Drained**: The version was Draining, and now all the pinned Workflows that were running on it are closed.

You can see these statuses in the `WorkerDeploymentVersionStatus` of each `VersionSummary` when you describe a Worker Deployment, or by describing the version directly. When a version is Draining or Drained, that is displayed in a value called `DrainageStatus`.

Periodically, the Temporal Service refreshes this status by counting any open pinned Workflows using that version. On each refresh, `DrainageInfo.last_checked_time` is updated. Eventually, `DrainageInfo` will report that the version is fully drained. At this point, no Workflows are still running on that version and no more will be automatically routed to it, so you can consider shutting down the running Workers. You can monitor this by checking `WorkerDeploymentInfo.VersionSummaries` or with `temporal worker deployment describe-version`:

```bash
temporal worker deployment describe-version \
    --deployment-name "YourDeploymentName" \
    --build-id "YourBuildID"
```

```
Worker Deployment Version:
  Version                  llm_srv.1.0
  CreateTime               5 hours ago
  RoutingChangedTime       32 seconds ago
  RampPercentage           0
  DrainageStatus           draining
  DrainageLastChangedTime  31 seconds ago
  DrainageLastCheckedTime  31 seconds ago

Task Queues:
  Name         Type
  hello-world  activity
  hello-world  workflow
```

If you have implemented [Queries](/sending-messages#sending-queries) on closed pinned Workflows, you may need to keep some Workers running to handle them.

### Adding a pre-deployment test

Before deploying a new Workflow revision, you can test it with synthetic traffic.
To do this, use pinning in your tests, following the examples below:

```go
workflowOptions := client.StartWorkflowOptions{
	ID:        "MyWorkflowId",
	TaskQueue: "MyTaskQueue",
	VersioningOverride: &client.PinnedVersioningOverride{
		Version: worker.WorkerDeploymentVersion{
			DeploymentName: "DeployName",
			BuildId:        "1.0",
		},
	},
}
// c is an initialized Client
we, err := c.ExecuteWorkflow(context.Background(), workflowOptions, HelloWorld, "Hello")
```

```java
MyWorkflow handle = client.newWorkflowStub(
    MyWorkflow.class,
    WorkflowOptions.newBuilder()
        .setWorkflowId("MyWorkflowId")
        .setTaskQueue("MyTaskQueue")
        .setVersioningOverride(new VersioningOverride.PinnedVersioningOverride(
            new WorkerDeploymentVersion("DeployName", "1.0")))
        .build()
);
WorkflowExecution we = WorkflowClient.start(handle::execute, "Hello");
```

```python
handle = client.start_workflow(
    MyWorkflow.run,
    "Hello",
    id="MyWorkflowId",
    task_queue="MyTaskQueue",
    versioning_override=PinnedVersioningOverride(
        WorkerDeploymentVersion("DeployName", "1.0")
    ),
)
```

```ts
const handle = await client.workflow.start('helloWorld', {
  taskQueue: 'MyTaskQueue',
  workflowId: 'MyWorkflowId',
  versioningOverride: {
    pinnedTo: { buildId: '1.0', deploymentName: 'deploy-name' },
  },
});
```

```csharp
var workerV1 = new WorkerDeploymentVersion("deploy-name", "1.0");
var handle = await Client.StartWorkflowAsync(
    (HelloWorld wf) => wf.RunAsync(),
    new(id: "MyWorkflowId", taskQueue: "MyTaskQueue")
    {
        VersioningOverride = new VersioningOverride.Pinned(workerV1),
    }
);
```

```ruby
worker_v1 = Temporalio::WorkerDeploymentVersion.new(
  deployment_name: 'deploy-name',
  build_id: '1.0'
)

handle = env.client.start_workflow(
  HelloWorld,
  id: 'MyWorkflowId',
  task_queue: 'MyTaskQueue',
  versioning_override: Temporalio::VersioningOverride.pinned(worker_v1)
)
```

## Garbage collection

Worker Deployments are never garbage collected, but *Worker Deployment Versions* (often referred to as Versions, Worker Versions, or Deployment Versions) are. Versions are deleted to keep the total number of versions in one Worker Deployment less than or equal to [`matching.maxVersionsInDeployment`](https://github.com/temporalio/temporal/blob/a3a53266c002ae33b630a41977274f8b5b587031/common/dynamicconfig/constants.go#L1317-L1321), which is currently set to 100 in Temporal Cloud. That's a conservative number, and it could be increased if needed.

For example, when you deploy your 101st Worker version in a Worker Deployment, the server looks at the oldest drained version in the Worker Deployment. If it has had no pollers in the last 5 minutes, the server deletes it. If that version still has pollers, the server tries the next oldest version. If none of the 100 versions are eligible for deletion (i.e., none of them are drained with no pollers), then no version is deleted and the poll from the 101st version fails. At that point, to successfully deploy your 101st version, you would need to increase `matching.maxVersionsInDeployment` or stop polling from one of the old drained versions to make it eligible for cleanup.

If you want to re-deploy a previously deleted version, start polling with a Worker that has the same Build ID and Deployment name as the deleted version, and the server will recreate it.

This covers the complete lifecycle of working with Worker Versioning. We are continuing to improve this feature, and we welcome any feedback or feature requests using the sidebar link!

---

## Quickstarts

Choose your language to get started quickly.
---

## Environment configuration

The following table details all available settings, their corresponding environment variables, and their TOML file paths. For more information on using environment variables and configuration files to set up your Temporal Client, refer to the [Environment Configuration](/develop/environment-configuration).

| Setting | Environment Variable | TOML Path | Description |
| :--- | :--- | :--- | :--- |
| Configuration File Path | `TEMPORAL_CONFIG_FILE` | **NA** | Path to the TOML configuration file. |
| Server Address | `TEMPORAL_ADDRESS` | `profile.<name>.address` | The host and port of the Temporal Frontend service (e.g., "localhost:7233"). |
| Namespace | `TEMPORAL_NAMESPACE` | `profile.<name>.namespace` | The Temporal Namespace to connect to. |
| API Key | `TEMPORAL_API_KEY` | `profile.<name>.api_key` | An API key for authentication. If present, TLS is enabled by default. |
| Enable/Disable TLS | `TEMPORAL_TLS` | `profile.<name>.tls.disabled` | Set to "true" to enable TLS, "false" to disable. In TOML, `disabled = true` turns TLS off. |
| Client Certificate | `TEMPORAL_TLS_CLIENT_CERT_DATA` | `profile.<name>.tls.client_cert_data` | The raw PEM data containing the client's public TLS certificate. Alternatively, you can use `TEMPORAL_TLS_CLIENT_CERT_PATH` to provide a path to the certificate, or the TOML `profile.<name>.tls.client_cert_path`. |
| Client Certificate Path | `TEMPORAL_TLS_CLIENT_CERT_PATH` | `profile.<name>.tls.client_cert_path` | A filesystem path to the client's public TLS certificate. Alternatively, you can provide the raw PEM data using `TEMPORAL_TLS_CLIENT_CERT_DATA` or the TOML `profile.<name>.tls.client_cert_data`. |
| Client Key | `TEMPORAL_TLS_CLIENT_KEY_DATA` | `profile.<name>.tls.client_key_data` | The raw PEM data containing the client's private TLS key. Alternatively, you can use `TEMPORAL_TLS_CLIENT_KEY_PATH` to provide a path to the key, or the TOML `profile.<name>.tls.client_key_path`. |
| Client Key Path | `TEMPORAL_TLS_CLIENT_KEY_PATH` | `profile.<name>.tls.client_key_path` | A filesystem path to the client's private TLS key. Alternatively, you can provide the raw PEM data using `TEMPORAL_TLS_CLIENT_KEY_DATA` or the TOML `profile.<name>.tls.client_key_data`. |
| Server CA Cert | `TEMPORAL_TLS_SERVER_CA_CERT_DATA` | `profile.<name>.tls.server_ca_cert_data` | The raw PEM data for the Certificate Authority certificate used to verify the server. Alternatively, you can use `TEMPORAL_TLS_SERVER_CA_CERT_PATH` to provide a path, or the TOML `profile.<name>.tls.server_ca_cert_path`. |
| Server CA Cert Path | `TEMPORAL_TLS_SERVER_CA_CERT_PATH` | `profile.<name>.tls.server_ca_cert_path` | A filesystem path to the Certificate Authority certificate. Alternatively, you can provide the raw PEM data using `TEMPORAL_TLS_SERVER_CA_CERT_DATA` or the TOML `profile.<name>.tls.server_ca_cert_data`. |
| TLS Server Name | `TEMPORAL_TLS_SERVER_NAME` | `profile.<name>.tls.server_name` | Overrides the server name used for Server Name Indication (SNI) in the TLS handshake. |
| Disable Host Verification | `TEMPORAL_TLS_DISABLE_HOST_VERIFICATION` | `profile.<name>.tls.disable_host_verification` | A boolean to disable server hostname verification. Use with caution. Not supported by all SDKs. |
| Codec Endpoint | `TEMPORAL_CODEC_ENDPOINT` | `profile.<name>.codec.endpoint` | The endpoint for a remote Data Converter. This is not supported by all SDKs. SDKs that support this configuration don't apply it by default. Intended mostly for CLI use. |
| Codec Auth | `TEMPORAL_CODEC_AUTH` | `profile.<name>.codec.auth` | The authorization header value for the remote data converter. |
| gRPC Metadata | `TEMPORAL_GRPC_META_*` | `profile.<name>.grpc_meta` | Sets gRPC headers. The part after `_META_` becomes the header key (e.g., `_SOME_KEY` -> `some-key`). |
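For reference, here is a minimal sketch of what a TOML configuration file following the paths above might look like. The profile name `default` and all values are illustrative placeholders, not required names:

```toml
# Illustrative profile; replace the values with your own.
[profile.default]
address = "localhost:7233"
namespace = "your-namespace"
api_key = "your-api-key"  # when present, TLS is enabled by default

[profile.default.tls]
disabled = false
client_cert_path = "/path/to/client.pem"
client_key_path = "/path/to/client.key"

# Optional remote Data Converter settings.
[profile.default.codec]
endpoint = "http://localhost:8080"

# Each key under grpc_meta becomes a gRPC header.
[profile.default.grpc_meta]
some-key = "some-value"
```

You could then point your Client at this file with the `TEMPORAL_CONFIG_FILE` environment variable.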
---

## OSS Temporal Service metrics reference

:::info OSS Temporal Service metrics
The information on this page is relevant to open source [Temporal Service deployments](/temporal-service).

See [Cloud metrics](/cloud/metrics/) for metrics emitted by [Temporal Cloud](/cloud/overview).

See [SDK metrics](/references/sdk-metrics) for metrics emitted by the [SDKs](/encyclopedia/temporal-sdks).
:::

A Temporal Service emits a range of metrics to help operators get visibility into the Temporal Service's performance and to set up alerts. All metrics emitted by the Temporal Service are listed in [metric_defs.go](https://github.com/temporalio/temporal/blob/main/common/metrics/metric_defs.go). For details on setting up metrics in your Temporal Service configuration, see the [Temporal Service configuration reference](/references/configuration#global).

The [dashboards repository](https://github.com/temporalio/dashboards) contains community-driven Grafana dashboard templates that can be used as a starting point for monitoring the Temporal Service and SDK metrics. You can use these templates as references to build your own dashboards. For any metrics that are missing in the dashboards, use [metric_defs.go](https://github.com/temporalio/temporal/blob/main/common/metrics/metric_defs.go) as a reference.

Note that, apart from these metrics emitted by the Temporal Service, you should also monitor infrastructure-specific metrics like CPU, memory, and network for all hosts that are running Temporal services.

## Common metrics

Temporal emits metrics for each gRPC service request. These metrics are emitted with `type`, `operation`, and `namespace` tags, which provide visibility into Service usage and show the request rates across Services, Namespaces, and Operations.

- Use the `operation` tag in your query to get request rates, error rates, or latencies per operation.
- Use the `service_name` tag with the [service role tag values](https://github.com/temporalio/temporal/blob/bba148cf1e1642fd39fa0174423b183d5fc62d95/common/metrics/defs.go#L108) to get details for the specific service.

All common tags that you can add in your query are defined in the [metric_defs.go](https://github.com/temporalio/temporal/blob/main/common/metrics/metric_defs.go) file.

For example, to see service requests by operation on the Frontend Service, use the following:

`sum by (operation) (rate(service_requests{service_name="frontend"}[2m]))`

Note: All metrics queries in this topic are [Prometheus queries](https://prometheus.io/docs/prometheus/latest/querying/basics/).

The following list describes some metrics you can get started with.

### `service_requests`

Shows service requests received per Task Queue.

Example: Service requests by operation

`sum(rate(service_requests{operation="AddWorkflowTask"}[2m]))`

### `service_latency`

Shows latencies for all Client request operations. Usually these are the starting point to investigate which operation is experiencing high-latency issues.
Example: P95 service latency by operation for the Frontend Service

`histogram_quantile(0.95, sum(rate(service_latency_bucket{service_name="frontend"}[5m])) by (operation, le))`

### `service_error_with_type`

(Available only in v1.17.0+) Identifies errors encountered by the service.

Example: Service errors by type for the Frontend Service

`sum(rate(service_error_with_type{service_name="frontend"}[5m])) by (error_type)`

### `client_errors`

An indicator for connection issues between different Server roles.

Example: Client errors

`sum(rate(client_errors{service_name="frontend",service_role="history"}[5m]))`

In addition to these, you can define some service-specific metrics to get performance details for each service. Start with the following list, and use [metric_defs.go](https://github.com/temporalio/temporal/blob/main/common/metrics/metric_defs.go) to define additional metrics as required.

## Matching Service metrics

### `poll_success`

Counts Tasks that are successfully matched to a poller.

Example: `sum(rate(poll_success{}[5m]))`

### `poll_timeouts`

Counts polls in which no Task became available for the poller within the poll timeout.

Example: `sum(rate(poll_timeouts{}[5m]))`

### `asyncmatch_latency`

Measures the time from creation to delivery for async matched Tasks. The larger this latency, the longer Tasks are sitting in the queue waiting for your Workers to pick them up.

Example: `histogram_quantile(0.95, sum(rate(asyncmatch_latency_bucket{service_name="matching"}[5m])) by (operation, le))`

### `no_poller_tasks`

A counter emitted whenever a Task is added to a Task Queue that has no poller. This is usually an indicator that either the Worker or the starter programs are using the wrong Task Queue.

## History Service metrics

A History Task is an internal Task in Temporal that is created as part of a transaction to update Workflow state and is processed by the Temporal History Service. It is critical to ensure that the History Task processing system is healthy. The following key metrics can be used to monitor the History Service health:

### `task_requests`

Emitted on every Task process request.

Example: `sum(rate(task_requests{operation=~"TransferActive.*"}[1m]))`

### `task_errors`

Emitted on every Task process error.

Example: `sum(rate(task_errors{operation=~"TransferActive.*"}[1m]))`

### `task_attempt`

Number of attempts on each Task Execution. A Task is retried forever, and each retry increases the attempt count.

Example: `histogram_quantile(0.95, sum(rate(task_attempt_bucket{operation=~"TransferActive.*"}[1m])) by (operation, le))`

### `task_latency_processing`

Shows the processing latency per attempt.

Example: `histogram_quantile(0.95, sum(rate(task_latency_processing_bucket{operation=~"TransferActive.*",service_name="history"}[1m])) by (operation, le))`

### `task_latency`

Measures the in-memory latency across multiple attempts.

### `task_latency_queue`

Measures the duration, end-to-end, from when the Task should be executed (from the time it was fired) to when the Task is done.

### `task_latency_load`

(Available only in v1.18.0+) Measures the duration from Task generation to Task loading (Task schedule-to-start latency for the persistence queue).

### `task_latency_schedule`

(Available only in v1.18.0+) Measures the duration from Task submission (to the Task scheduler) to processing (Task schedule-to-start latency for the in-memory queue).
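The `task_latency*` metrics above can be queried the same way as `task_latency_processing`. For example, assuming the standard Prometheus histogram `_bucket` suffix that the other latency examples in this topic use, a P95 query for `task_latency` might look like this:

```
histogram_quantile(0.95, sum(rate(task_latency_bucket{operation=~"TransferActive.*",service_name="history"}[1m])) by (operation, le))
```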
### `queue_latency_schedule`

(Available only in v1.18.0+) Measures the time to schedule 100 Tasks in one Task channel in the host-level Task scheduler. If fewer than 100 Tasks are in the Task channel for 30 seconds, the latency is scaled to 100 Tasks upon emission.

Note: This is still an experimental metric and is subject to change.

### `service_latency_userlatency`

Shows the latency introduced by Workflow logic. For example, if you have one Workflow scheduling many Activities or Child Workflows at the same time, it can cause per-Workflow lock contention. The wait period for the per-Workflow lock is counted as `userlatency`. The `operation` tag contains details about Task type and Active versus Standby status, and can be used to get request rates, error rates, or latencies per operation, which can help identify issues caused by database problems.

## Persistence metrics

Temporal Server emits metrics for every persistence database read and write. Some of the most important ones are the following:

### `persistence_requests`

Emitted on every persistence request.

Examples:

- Prometheus query for getting the total number of persistence requests by operation for the History Service: `sum by (operation) (rate(persistence_requests{service_name="history"}[1m]))`
- Prometheus query for getting the total number of persistence requests by operation for the Matching Service: `sum by (operation) (rate(persistence_requests{service_name="matching"}[1m]))`

### `persistence_errors`

Shows all persistence errors. This metric is a good indicator for connection issues between the Temporal Service and the persistence store.

Example:

- Prometheus query for getting all persistence errors for the History Service: `sum (rate(persistence_errors{service_name="history"}[1m]))`

### `persistence_error_with_type`

Shows all errors related to the persistence store by type; each series contains an `error_type` tag.

- Prometheus query for getting persistence errors for the History Service by error type: `sum(rate(persistence_error_with_type{service_name="history"}[1m])) by (error_type)`

### `persistence_latency`

Shows the latency of persistence operations.

Example:

- Prometheus query for getting latency by percentile: `histogram_quantile(0.95, sum(rate(persistence_latency_bucket{service_name="history"}[1m])) by (operation, le))`

## Schedule metrics

Temporal emits metrics that track the performance and outcomes of Scheduled Workflow Executions. Below are additional metrics that can help you monitor and optimize your Scheduled Workflow Executions.

### `schedule_buffer_overruns`

Indicates instances where the buffer for holding Scheduled Workflows exceeds its maximum capacity. This scenario typically occurs when schedules with a `buffer_all` overlap policy have their average run length exceeding the average schedule interval.

Example: To monitor buffer overruns.

`sum(rate(schedule_buffer_overruns{namespace="$namespace"}[5m]))`

### `schedule_missed_catchup_window`

Tracks occurrences when the system fails to execute a Scheduled Action within the defined catchup window. Missed catchup windows can result from extended outages beyond the configured catchup period.

Example: To identify missed catchup opportunities.

`sum(rate(schedule_missed_catchup_window{namespace="$namespace"}[5m]))`

### `schedule_rate_limited`

Reflects instances where the creation of Workflows by a Schedule is throttled due to rate limiting policies within a Namespace. This metric is crucial for identifying scheduling patterns that frequently hit rate limits, potentially causing missed catchup windows.

Example: To assess the impact of rate limiting on Scheduled Executions.

`sum(rate(schedule_rate_limited{namespace="$namespace"}[5m]))`

### `schedule_action_success`

Measures the successful execution of Workflows as per their schedules or through manual triggers. This metric confirms that Workflows are running as expected without delays or errors.

Example: To track the success rate of Scheduled Workflow Executions.

`sum(rate(schedule_action_success{namespace="$namespace"}[5m]))`

## Workflow metrics

These metrics pertain to Workflow statistics.

### `workflow_cancel`

Number of Workflows canceled before completing execution.

### `workflow_continued_as_new`

Number of Workflow Executions that were Continued-As-New from a past execution.

### `workflow_failed`

Number of Workflows that failed before completion.

### `workflow_success`

Number of Workflows that successfully completed.

### `workflow_timeout`

Number of Workflows that timed out before completing execution.
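These are counters and can be queried like the Service metrics above. For example, hypothetical queries comparing completion outcomes per Namespace (label names are assumptions based on the common tags described earlier):

```
sum by (namespace) (rate(workflow_success{}[5m]))
sum by (namespace) (rate(workflow_failed{}[5m]))
```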
## Nexus metrics

These metrics pertain to Nexus Operations.

### Nexus Machinery in the History Service

See the [architecture document](https://github.com/temporalio/temporal/blob/5d55d6c707bd68d8f3274c57ae702331adf05e6e/docs/architecture/nexus.md#scheduler) for more info.

#### In-Memory Buffer

`dynamic_worker_pool_scheduler_enqueued_tasks`: A counter that is incremented when a task is enqueued to the buffer.

`dynamic_worker_pool_scheduler_dequeued_tasks`: A counter that is incremented when a task is dequeued from the buffer.

`dynamic_worker_pool_scheduler_rejected_tasks`: A counter that is incremented when the buffer is full and adding the task is rejected.

`dynamic_worker_pool_scheduler_buffer_size`: A gauge that periodically samples the size of the buffer.

#### Concurrency Limiter

`dynamic_worker_pool_scheduler_active_workers`: A gauge that periodically samples the number of running goroutines.

#### Rate Limiter

`rate_limited_task_runnable_wait_time`: A histogram representing the time a task spends waiting for the rate limiter.

#### Circuit Breaker

`circuit_breaker_executable_blocked`: A counter that is incremented every time a task execution is blocked by the circuit breaker.

#### Task Executors

`nexus_outbound_requests`: A counter representing the number of Nexus outbound requests made by the History Service.

`nexus_outbound_latency`: A histogram representing the latency of outbound Nexus requests made by the History Service.

`callback_outbound_requests`: A counter representing the number of callback outbound requests made by the History Service.

`callback_outbound_latency`: A histogram representing the latency of outbound callback requests made by the History Service.

### Nexus Machinery on the Frontend Service

#### `nexus_requests`

The number of Nexus requests received by the service.

Type: Counter

#### `nexus_latency`

Latency of Nexus requests.

Type: Histogram

#### `nexus_request_preprocess_errors`

The number of Nexus requests for which pre-processing failed.

Type: Counter

#### `nexus_completion_requests`

The number of Nexus completion (callback) requests received by the service.

Type: Counter

#### `nexus_completion_latency`

Latency histogram of Nexus completion (callback) requests.

Type: Histogram

#### `nexus_completion_request_preprocess_errors`

The number of Nexus completion requests for which pre-processing failed.

Type: Counter

---

## Temporal Commands reference

A [Command](/workflow-execution#command) is a requested action issued by a [Worker](/workers#worker) to the [Temporal Service](/temporal-service) after a [Workflow Task Execution](/tasks#workflow-task-execution) completes.
The following is a complete list of possible Commands.

### CompleteWorkflowExecution

This Command is triggered when the Workflow Function Execution returns. It indicates to the Temporal Service that the [Workflow Execution](/workflow-execution) is complete. The corresponding [Event](/workflow-execution/event#event) for this Command is one of the few Events that will be the last in a Workflow Execution [Event History](/workflow-execution/event#event-history).

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [WorkflowExecutionCompleted](/references/events#workflowexecutioncompleted)

### ContinueAsNewWorkflowExecution

This Command is triggered when there is a call to [Continue-As-New](/workflow-execution/continue-as-new) from within the [Workflow](/workflows). The corresponding Event for this Command is one of the few Events that will be the last in a Workflow Execution Event History.

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [WorkflowExecutionContinuedAsNew](/references/events#workflowexecutioncontinuedasnew)

### FailWorkflowExecution

This Command is triggered when the Workflow Execution returns an error or an exception is thrown.

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [WorkflowExecutionFailed](/references/events#workflowexecutionfailed)

### CancelWorkflowExecution

This Command is triggered when the Workflow has successfully cleaned up after receiving a Cancellation Request (which will be present as a [WorkflowExecutionCancelRequested](/references/events#workflowexecutioncancelrequested) Event in the Event History). The corresponding Event for this Command is one of the few Events that will be the last in a Workflow Execution Event History.

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [WorkflowExecutionCanceled](/references/events#workflowexecutioncanceled)

### StartChildWorkflowExecution

This Command is triggered by a call to spawn a [Child Workflow Execution](/child-workflows).

- Awaitable: Yes, a Workflow Execution can await on the action resulting from this Command.
- Corresponding Event: [ChildWorkflowExecutionStarted](/references/events#childworkflowexecutionstarted)

By default, you cannot have more than 2,000 pending Child Workflows.

### SignalExternalWorkflowExecution

This Command is triggered by a call to [Signal](/sending-messages#sending-signals) another Workflow Execution.

- Awaitable: Yes, a Workflow Execution can await on the action resulting from this Command.
- Corresponding Event: [SignalExternalWorkflowExecutionInitiated](/references/events#signalexternalworkflowexecutioninitiated)

By default, you cannot have more than 2,000 pending Signals to other Workflows.

### RequestCancelExternalWorkflowExecution

This Command is triggered by a call to request cancellation of another Workflow Execution.

- Awaitable: Yes, a Workflow Execution can await on the action resulting from this Command.
- Corresponding Event: [RequestCancelExternalWorkflowExecutionInitiated](/references/events#requestcancelexternalworkflowexecutioninitiated)

By default, you cannot have more than 2,000 pending Cancellation Requests to other Workflows.

### ScheduleActivityTask

This Command is triggered by a call to execute an [Activity](/activities).
- Awaitable: Yes, a Workflow Execution can await on the action resulting from this Command.
- Corresponding Event: [ActivityTaskScheduled](/references/events#activitytaskscheduled)

By default, you cannot schedule more than 2,000 Activities concurrently.

### RequestCancelActivityTask

This Command is triggered by a call to request the cancellation of an [Activity Task](/tasks#activity-task).

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [ActivityTaskCancelRequested](/references/events#activitytaskcancelrequested)

### StartTimer

This Command is triggered by a call to start a Timer.

- Awaitable: Yes, a Workflow Execution can await on the action resulting from this Command.
- Corresponding Event: [TimerStarted](/references/events#timerstarted)

### CancelTimer

This Command is triggered by a call to cancel a Timer.

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [TimerCanceled](/references/events#timercanceled)

### RecordMarker

This Command is triggered by the SDK.

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [MarkerRecorded](/references/events#markerrecorded)

### UpsertWorkflowSearchAttributes

This Command is triggered by a call to "upsert" Workflow [Search Attributes](/search-attribute).

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [UpsertWorkflowSearchAttributes](/references/events#upsertworkflowsearchattributes)

### ProtocolMessageCommand

This Command helps guarantee ordering constraints for features such as Updates. This Command points at the message from which the Event is created. Therefore, just from the Command, you can't predict the resulting Event type.

### ScheduleNexusOperation

This Command is triggered by a call to execute a Nexus Operation in the caller Workflow.

- Awaitable: Yes, a Workflow Execution can await on the action resulting from this Command.
- Corresponding Event: [NexusOperationScheduled](/references/events#nexusoperationscheduled)

By default, you can't schedule more than 30 Nexus Operations concurrently; see [Limits](/workflow-execution/limits#workflow-execution-nexus-operation-limits) for details.

### CancelNexusOperation

This Command is triggered by a call to request the cancellation of a Nexus Operation.

- Awaitable: No, a Workflow Execution can not await on the action resulting from this Command.
- Corresponding Event: [NexusOperationCancelRequested](/references/events#nexusoperationcancelrequested)
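To make the mapping from code to Commands concrete, here is a minimal Go sketch (the names `MyWorkflow` and `MyActivity` are hypothetical). The comments note which of the Commands above the Worker issues for each call when the Workflow Task completes:

```go
package app

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// MyActivity is a hypothetical Activity used only for illustration.
func MyActivity() error { return nil }

func MyWorkflow(ctx workflow.Context) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})

	// Produces a ScheduleActivityTask Command.
	if err := workflow.ExecuteActivity(ctx, MyActivity).Get(ctx, nil); err != nil {
		// Returning an error from the Workflow produces FailWorkflowExecution.
		return err
	}

	// Produces a StartTimer Command.
	if err := workflow.Sleep(ctx, time.Hour); err != nil {
		return err
	}

	// Returning normally produces CompleteWorkflowExecution.
	return nil
}
```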
---

## Temporal Cluster configuration reference

Much of the behavior of a Temporal Cluster is configured through the `development.yaml` file, which may contain the following top-level sections:

- [`global`](#global)
- [`persistence`](#persistence)
- [`log`](#log)
- [`clusterMetadata`](#clustermetadata)
- [`services`](#services)
- [`publicClient`](#publicclient)
- [`archival`](#archival)
- [`namespaceDefaults`](#namespacedefaults)
- [`dcRedirectionPolicy`](#dcredirectionpolicy)
- [`dynamicConfigClient`](#dynamicconfigclient)

Changing any properties in the `development.yaml` file requires a process restart for changes to take effect. Configuration parsing code is available [here](https://github.com/temporalio/temporal/blob/main/common/config/config.go).

## global

The `global` section contains process-wide configuration. See below for a minimal configuration (optional parameters are commented out).

```yaml
global:
  membership:
    broadcastAddress: '127.0.0.1'
  metrics:
    prometheus:
      framework: 'tally'
      listenAddress: '127.0.0.1:8000'
```

### membership

The `membership` section controls the following membership layer parameters.

#### maxJoinDuration

The amount of time the service will attempt to join the gossip layer before failing. Default is 10s.

#### broadcastAddress

Used by the gossip protocol to communicate with other hosts in the same Cluster for membership info. Use an IP address that is reachable by other hosts in the same Cluster. If there is only one host in the Cluster, you can use 127.0.0.1. Check `net.ParseIP` for supported syntax; only IPv4 is supported.

### metrics

Configures the Cluster's metric subsystem. Specific providers are configured using provider names as the keys:

- [`statsd`](#statsd)
- `prometheus`
- `m3`

#### prefix

The prefix to be applied to all outgoing metrics.

#### tags

The set of key-value pairs to be reported as part of every metric.

#### excludeTags

A map from tag name string to tag value string list. This is useful to exclude tags that might have unbounded cardinality. The value string list can be used to whitelist values of the excluded tag that should continue to be included. For example, if `task_queue` has unbounded cardinality, you can exclude the tag while still whitelisting the specific `task_queue` values you want to see.

#### statsd

:::caution
`statsd` is not supported natively by Temporal.
:::

The `statsd` section supports the following settings:

- `hostPort`: The host:port of the statsd server.
- `prefix`: Specific prefix in reporting to `statsd`.
- `flushInterval`: Maximum interval for sending packets. (_Default_ 300ms.)
- `flushBytes`: Specifies the maximum UDP packet size you wish to send. (_Default_ 1432 bytes.)

#### prometheus

The `prometheus` section supports the following settings:

- `framework`: The framework to use; currently supports `opentelemetry` and `tally`, default is `tally`. We plan to switch the default to `opentelemetry` once its API becomes stable.
- `listenAddress`: Address for Prometheus to scrape metrics from. The Temporal Server uses the Prometheus client API, and the `listenAddress` configuration is used to listen for metrics.
- `handlerPath`: Metrics handler path for the scraper; default is `/metrics`.

#### m3

The `m3` section supports the following settings:

- `hostPort`: The host:port of the M3 server.
- `service`: The service tag that this client emits.
- `queue`: M3 reporter queue size; default is 4k.
- `packetSize`: M3 reporter max packet size; default is 32k.

### pprof

- `port`: If specified, this will initialize pprof upon process start on the listed port.

### tls

The `tls` section controls the SSL/TLS settings for network communication and contains two subsections, `internode` and `frontend`. The `internode` section governs internal service communication among roles, while the `frontend` section governs SDK client communication to the Frontend Service role.

Each of these subsections contains a `server` section and a `client` section. The `server` section contains the following parameters:

- `certFile`: The path to the file containing the PEM-encoded public key of the certificate to use.
- `keyFile`: The path to the file containing the PEM-encoded private key of the certificate to use.
- `requireClientAuth`: _boolean_ - Requires clients to authenticate with a certificate when connecting, otherwise known as mutual TLS.
- `clientCaFiles`: A list of paths to files containing the PEM-encoded public keys of the Certificate Authorities you wish to trust for client authentication. This value is ignored if `requireClientAuth` is not enabled.

:::tip
See the [server samples repo](https://github.com/temporalio/samples-server/tree/master/tls) for sample TLS configurations.
:::

Below is an example enabling Server TLS (https) between SDKs and the Frontend APIs:

```yaml
global:
  tls:
    frontend:
      server:
        certFile: /path/to/cert/file
        keyFile: /path/to/key/file
      client:
        serverName: dnsSanInFrontendCertificate
```

Note that the `client` section generally needs to be provided to specify an expected DNS SubjectName contained in the presented server certificate via the `serverName` field; this is needed because Temporal uses IP-to-IP communication. You can avoid specifying this if your server certificates contain the appropriate IP Subject Alternative Names.

Additionally, the `rootCaFiles` field needs to be provided when the client's host does not trust the Root CA used by the server. The example below extends the previous example to manually specify the Root CA used by the Frontend Services:

```yaml
global:
  tls:
    frontend:
      server:
        certFile: /path/to/cert/file
        keyFile: /path/to/key/file
      client:
        serverName: dnsSanInFrontendCertificate
        rootCaFiles:
          - /path/to/frontend/server/CA/files
```

Below is an additional example of a fully secured cluster using mutual TLS for both frontend and internode communication with manually specified CAs:

```yaml
global:
  tls:
    internode:
      server:
        certFile: /path/to/internode/cert/file
        keyFile: /path/to/internode/key/file
        requireClientAuth: true
        clientCaFiles:
          - /path/to/internode/serverCa
      client:
        serverName: dnsSanInInternodeCertificate
        rootCaFiles:
          - /path/to/internode/serverCa
    frontend:
      server:
        certFile: /path/to/frontend/cert/file
        keyFile: /path/to/frontend/key/file
        requireClientAuth: true
        clientCaFiles:
          - /path/to/internode/serverCa
          - /path/to/sdkClientPool1/ca
          - /path/to/sdkClientPool2/ca
      client:
        serverName: dnsSanInFrontendCertificate
        rootCaFiles:
          - /path/to/frontend/serverCa
```

**Note:** In the case that client authentication is enabled, the `internode.server` certificate is used as the client certificate among services. This adds the following requirements:

- The `internode.server` certificate must be specified on all roles, even for a frontend-only configuration.
- Internode server certificates must be minted with either **no** Extended Key Usages or **both** ServerAuth and ClientAuth EKUs.
- If your Certificate Authorities are untrusted, such as in the previous example, the internode server CA will need to be specified in the following places:
  - `internode.server.clientCaFiles`
  - `internode.client.rootCaFiles`
  - `frontend.server.clientCaFiles`

## persistence

The `persistence` section holds configuration for the data store/persistence layer. The following example shows a minimal specification for a password-secured Cluster using Cassandra.

```yaml
persistence:
  defaultStore: default
  visibilityStore: cass-visibility # The primary Visibility store.
  secondaryVisibilityStore: es-visibility # A secondary Visibility store added to enable Dual Visibility.
  numHistoryShards: 512
  datastores:
    default:
      cassandra:
        hosts: '127.0.0.1'
        keyspace: 'temporal'
        user: 'username'
        password: 'password'
    cass-visibility:
      cassandra:
        hosts: '127.0.0.1'
        keyspace: 'temporal_visibility'
    es-visibility:
      elasticsearch:
        version: 'v7'
        logLevel: 'error'
        url:
          scheme: 'http'
          host: '127.0.0.1:9200'
        indices:
          visibility: temporal_visibility_v1_dev
        closeIdleConnectionsInterval: 15s
```

The following top-level configuration items are required:

### numHistoryShards

_Required_ - The number of history shards to create when initializing the Cluster.

**Warning:** This value is immutable and will be ignored after the first run. Please ensure you set this value high enough to scale with the worst-case peak load for this Cluster.

### defaultStore

_Required_ - The name of the data store definition that should be used by the Temporal server.

### visibilityStore

_Required_ - The name of the primary data store definition that should be used to set up [Visibility](/temporal-service/visibility) on the Temporal Cluster.

### secondaryVisibilityStore

_Optional_ - The name of the secondary data store definition that should be used to set up [Dual Visibility](/dual-visibility) on the Temporal Cluster.

### datastores

_Required_ - Contains named data store definitions to be referenced. Each definition is declared under a heading with its name (i.e., `default:`, `cass-visibility:`, and `es-visibility:` above), which contains a data store definition. Data store definitions must be either `cassandra` or `sql`.

#### cassandra

A `cassandra` data store definition can contain the following values:

- `hosts`: _Required_ - Comma-separated Cassandra endpoints, e.g. "192.168.1.2,192.168.1.3,192.168.1.4".
- `port`: Default: 9042 - Cassandra port used for connection by the `gocql` client.
- `user`: Cassandra username used for authentication by the `gocql` client.
- `password`: Cassandra password used for authentication by the `gocql` client.
- `keyspace`: _Required_ - The Cassandra keyspace.
- `datacenter`: The data center filter arg for Cassandra.
- `maxConns`: The max number of connections to this data store for a single TLS configuration.
- `tls`: See TLS below.

#### sql

A `sql` data store definition can contain the following values:

- `user`: Username used for authentication.
- `password`: Password used for authentication.
- `pluginName`: _Required_ - SQL database type.
  - _Valid values_: `mysql` or `postgres`.
- `databaseName`: _Required_ - The name of the SQL database to connect to.
- `connectAddr`: _Required_ - The remote address of the database, e.g. "192.168.1.2".
- `connectProtocol`: _Required_ - The protocol that goes with the `connectAddr`.
  - _Valid values_: `tcp` or `unix`.
- `connectAttributes`: A map of key-value attributes to be sent as part of the connect `data_source_name` URL.
- `maxConns`: The max number of connections to this data store.
- `maxIdleConns`: The max number of idle connections to this data store.
- `maxConnLifetime`: The maximum time a connection can be alive.
- `tls`: See below.

#### tls

The `tls` and `mtls` sections can contain the following values:

- `enabled`: _boolean_.
- `serverName`: Name of the server hosting the data store.
- `certFile`: Path to the cert file.
- `keyFile`: Path to the key file.
- `caFile`: Path to the CA file.
- `enableHostVerification`: _boolean_ - `true` to verify the hostname and server cert (like a wildcard for a Cassandra cluster). This option is basically the inverse of `InsecureSkipVerify`. See `InsecureSkipVerify` in http://golang.org/pkg/crypto/tls/ for more info.

Note: `certFile` and `keyFile` are optional depending on server config, but both fields must be omitted to avoid using a client certificate.
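For example, a minimal sketch of a `sql` data store secured with TLS might look like the following, using only the keys documented above (all values are illustrative placeholders):

```yaml
datastores:
  default:
    sql:
      pluginName: 'postgres'
      databaseName: 'temporal'
      connectAddr: '192.168.1.2:5432'
      connectProtocol: 'tcp'
      user: 'temporal'
      password: 'password'
      tls:
        enabled: true
        caFile: /path/to/ca/file
        certFile: /path/to/client/cert/file
        keyFile: /path/to/client/key/file
        enableHostVerification: true
        serverName: dnsNameOfDatabaseServer
```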
Note: `certFile` and `keyFile` are optional depending on the server config. Provide both fields to use a client certificate, or omit both to connect without one.

## log

The `log` section is optional and contains the following possible values:

- `stdout` - _boolean_ - `true` if the output needs to go to standard out.
- `level` - sets the logging level.
  - _Valid values_ - `debug`, `info`, `warn`, `error`, or `fatal`. Defaults to `info`.
- `outputFile` - path to the output log file.

## clusterMetadata

`clusterMetadata` contains the local cluster information. The information is used in [Multi-Cluster Replication](/temporal-service/multi-cluster-replication).

An example `clusterMetadata` section:

```yaml
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: 'active'
  currentClusterName: 'active'
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 0
      rpcAddress: '127.0.0.1:7233'
      #replicationConsumer:
      #type: kafka
```

- `currentClusterName` - _Required_ - the name of the current cluster. **Warning:** This value is immutable and is ignored after the first run.
- `enableGlobalNamespace` - _Default:_ `false`.
- `replicationConsumer` - determines which method to use to consume replication tasks. The type may be either `kafka` or `rpc`.
- `failoverVersionIncrement` - the increment of each cluster version when failover happens.
- `masterClusterName` - the master cluster name. Only the master cluster can register/update Namespaces; all clusters can do Namespace failover.
- `clusterInformation` - maps the local cluster name to its `ClusterInformation` definition. The local cluster name should be consistent with `currentClusterName`.

`ClusterInformation` sections consist of:

- `enabled` - _boolean_ - whether a remote cluster is enabled for replication.
- `initialFailoverVersion`
- `rpcAddress` - indicates the remote service address (host:port). The host can be a DNS name. Use the `dns:///` prefix to enable round-robin across the IP addresses behind a DNS name.
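As a sketch of how a two-cluster replication setup might declare one `ClusterInformation` entry per cluster; the cluster names, failover versions, and the standby address are illustrative only:

```yaml
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: 'active'
  currentClusterName: 'active'
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: '127.0.0.1:7233'
    standby: # hypothetical remote cluster name
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: 'dns:///standby.example.internal:7233' # placeholder DNS name
```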
## services

The `services` section contains configuration keyed by service role type. There are four supported service roles:

- `frontend`
- `matching`
- `worker`
- `history`

Below is a minimal example of a `frontend` service definition under `services`:

```yaml
services:
  frontend:
    rpc:
      grpcPort: 8233
      membershipPort: 8933
      bindOnIP: '0.0.0.0'
```

There are two sections defined under each service heading:

### rpc

_Required_

`rpc` contains settings related to the way a service interacts with other services. The following values are supported:

- `grpcPort`: Port on which gRPC will listen.
- `membershipPort`: Port used to communicate with other hosts in the same Cluster for membership info. Each service should use a different port. If there are multiple Temporal Clusters in your environment (Kubernetes, for example) and they have network access to each other, each Cluster should use a different membership port.
- `bindOnLocalHost`: Determines whether the service uses `127.0.0.1` as the listener address.
- `bindOnIP`: Used to bind the service to a specific IP, or to `0.0.0.0`. Check `net.ParseIP` for supported syntax. Only IPv4 is supported; mutually exclusive with the `bindOnLocalHost` option.

**Note:** Port values are currently expected to be consistent among role types across all hosts.

## publicClient

The `publicClient` section is required. It describes the configuration that the Worker Service needs to connect to the Temporal server for background server maintenance.

- `hostPort`: IPv4 host port or DNS name to reach the Temporal Frontend, [reference](https://github.com/grpc/grpc/blob/master/doc/naming.md)

Example:

```yaml
publicClient:
  hostPort: 'localhost:8933'
```

Use the `dns:///` prefix to enable round-robin across the IP addresses behind a DNS name.

## archival

_Optional_

Archival is an optional configuration used to set up the [Archival store](/temporal-service/archival). It can be enabled for `history` and `visibility` data. The following list describes the supported values for the `history` and `visibility` configurations.

- `state`: State of the Archival setting. Supported values are `enabled` and `disabled`. This value must be `enabled` to use Archival with any Namespace in your Cluster.
  - `enabled`: Enables Archival in your Cluster setup. When set to `enabled`, `URI` and `namespaceDefaults` values must be provided.
  - `disabled`: Disables Archival in your Cluster setup. When set to `disabled`, the `enableRead` value must be set to `false`, and under `namespaceDefaults`, `state` must be set to `disabled`, with no values set for the `provider` and `URI` fields.
- `enableRead`: Supported values are `true` or `false`. Set to `true` to allow read operations from the archived Event History data.
- `provider`: Location where data should be archived. Subprovider configs are `filestore`, `gstorage`, `s3`, or `your_custom_provider`. The default configuration specifies `filestore`.

Example:

- To enable Archival in your Cluster configuration:

```yaml
# Cluster-level Archival config enabled
archival:
  # Event History configuration
  history:
    # Archival is enabled for the History Service data.
    state: 'enabled'
    enableRead: true
    # Namespaces can use either the local filestore provider or the Google Cloud provider.
    provider:
      filestore:
        fileMode: '0666'
        dirMode: '0766'
      gstorage:
        credentialsPath: '/tmp/gcloud/keyfile.json'
  # Configuration for archiving Visibility data.
  visibility:
    # Archival is enabled for Visibility data.
    state: 'enabled'
    enableRead: true
    provider:
      filestore:
        fileMode: '0666'
        dirMode: '0766'
```

- To disable Archival in your Cluster configuration:

```yaml
# Cluster-level Archival config disabled
archival:
  history:
    state: 'disabled'
    enableRead: false
  visibility:
    state: 'disabled'
    enableRead: false
namespaceDefaults:
  archival:
    history:
      state: 'disabled'
    visibility:
      state: 'disabled'
```

For more details on Archival setup, see [Set up Archival](/self-hosted-guide/archival#set-up-archival).

## namespaceDefaults

_Optional_

Sets the default Archival configuration for each Namespace using `namespaceDefaults` for `history` and `visibility` data.

- `state`: Default state of Archival for the Namespace. Supported values are `enabled` or `disabled`.
- `URI`: Default URI for the Namespace.

For more details on setting Namespace defaults on Archival, see [Namespace creation in Archival setup](/self-hosted-guide/archival#namespace-creation)

Example:

```yaml
---
# Default values for a Namespace if none are provided at creation.
namespaceDefaults:
  # Archival defaults.
  archival:
    # Event History defaults.
    history:
      state: 'enabled'
      # New Namespaces will default to the local provider.
      URI: 'file:///tmp/temporal_archival/development'
    visibility:
      state: 'disabled'
      URI: 'file:///tmp/temporal_vis_archival/development'
```
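For remote storage, a hedged sketch of Namespace defaults pointing Archival at an S3 bucket; the bucket name is a placeholder, and this assumes the `s3://` URI scheme used by the S3 archiver:

```yaml
namespaceDefaults:
  archival:
    history:
      state: 'enabled'
      URI: 's3://your-archival-bucket' # placeholder bucket name
    visibility:
      state: 'enabled'
      URI: 's3://your-archival-bucket' # placeholder bucket name
```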
## dcRedirectionPolicy

_Optional_

Contains the Frontend datacenter API redirection policy that you can use for cross-DC replication. Supported values:

- `policy`: Supported values are `noop`, `selected-apis-forwarding`, and `all-apis-forwarding`.
  - `noop`: Not setting a value or setting `noop` means no redirection. This is the default value.
  - `selected-apis-forwarding`: Sets up forwarding for the following APIs to the active Cluster based on the Namespace.
    - `StartWorkflowExecution`
    - `SignalWithStartWorkflowExecution`
    - `SignalWorkflowExecution`
    - `RequestCancelWorkflowExecution`
    - `TerminateWorkflowExecution`
    - `QueryWorkflow`
  - `all-apis-forwarding`: Sets up forwarding for all APIs on the Namespace in the active Cluster.

Example:

```yaml
#...
dcRedirectionPolicy:
  policy: 'selected-apis-forwarding'
#...
```

## dynamicConfigClient

_Optional_

Configuration for setting up the file-based [dynamic configuration](/temporal-service/configuration#dynamic-configuration) client for the Cluster. This setting is required if you specify dynamic configuration. Supported configuration values are as follows:

- `filepath`: Specifies the path where the dynamic configuration YAML file is stored. The path should be relative to the root directory.
- `pollInterval`: Interval between polls by the file-based client to check for dynamic configuration updates. The minimum period you can set is 5 seconds.

Example:

```yaml
dynamicConfigClient:
  filepath: 'config/dynamicconfig/development-cass.yaml'
  pollInterval: '10s'
```

---

## Temporal Cluster dynamic configuration reference

Temporal Cluster provides [dynamic configuration](/temporal-service/configuration#dynamic-configuration) keys that you can update and apply to a running Cluster without restarting your services.

The dynamic configuration keys are set with default values when you create your Cluster configuration. You can override these values as you test your Cluster setup for optimal performance according to your workload requirements.

For the complete list of dynamic configuration keys, see [https://github.com/temporalio/temporal/blob/main/common/dynamicconfig/constants.go](https://github.com/temporalio/temporal/blob/main/common/dynamicconfig/constants.go). Ensure that you check server release notes for any changes to these keys and values.

For the default values of dynamic configuration keys, check the following links:

- [Frontend Service](https://github.com/temporalio/temporal/blob/5783e781504d8ffac59f9848b830868f3139b980/service/frontend/service.go#L176)
- [History Service](https://github.com/temporalio/temporal/blob/5783e781504d8ffac59f9848b830868f3139b980/service/history/configs/config.go#L309)
- [Matching Service](https://github.com/temporalio/temporal/blob/5783e781504d8ffac59f9848b830868f3139b980/service/matching/config.go#L125)
- [Worker Service](https://github.com/temporalio/temporal/blob/5783e781504d8ffac59f9848b830868f3139b980/service/worker/service.go#L193)

Setting dynamic configuration is optional. Change these values only if you need to override the default values to achieve better performance on your Temporal Cluster. Also, ensure that you test your changes before setting these in production.

## Format

To override the default dynamic configuration values, specify your custom values and constraints for the dynamic configuration keys that you want to change in a YAML configuration file.

Use the following format when creating your dynamic configuration file.
```yaml
testGetBoolPropertyKey:
  - value: false
  - value: true
    constraints:
      namespace: 'your-namespace'
  - value: false
    constraints:
      namespace: 'your-other-namespace'
testGetDurationPropertyKey:
  - value: '1m'
    constraints:
      namespace: 'your-namespace'
      taskQueueName: 'longIdleTimeTaskqueue'
testGetFloat64PropertyKey:
  - value: 12.0
    constraints:
      namespace: 'your-namespace'
testGetMapPropertyKey:
  - value:
      key1: 1
      key2: 'value 2'
      key3:
        - false
        - key4: true
          key5: 2.0
```

### Constraints

You can define constraints on some dynamic configuration keys to set specific values that apply at a Namespace or Task Queue level. Not defining constraints on a dynamic configuration key applies the value across the entire Cluster.

- To set a global value for a configuration key with no constraints, use the following:

```yaml
frontend.globalNamespaceRPS: # Total per-Namespace RPC rate limit applied across the Cluster.
  - value: 5000
```

- For keys that can be customized at the Namespace level, you can specify multiple values for different Namespaces in addition to one default value that applies globally to all Namespaces. To set values at a Namespace level, use `namespace` (String) as shown in the following example.

```yaml
frontend.persistenceNamespaceMaxQPS: # Rate limit on the number of queries the Frontend sends to the Persistence store.
  - constraints: {} # Sets a default value that applies to all Namespaces.
    value: 2000 # The default value for this key is 0.
  - constraints: { namespace: 'namespace1' } # Sets a limit on the number of queries that the "namespace1" Namespace can send to the Persistence store.
    value: 4000
  - constraints: { namespace: 'namespace2' }
    value: 1000
```

- For keys that can be customized at a Task Queue level, you can specify a Task Queue name and Task type in addition to a Namespace. To set values at a Task Queue level, use `taskQueueName` (String) with `taskType` (optional; supported values: `Workflow` and `Activity`). For example, if you have Workflow Executions creating a large number of Workflow and Activity tasks per second, you can add more partitions to your Task Queues (the default is 4) to handle the high throughput of tasks. To do this, add the following to your dynamic configuration file. Note that if you change the number of partitions, you must set the same count for both read and write operations on Task Queues.

```yaml
matching.numTaskqueueReadPartitions: # Number of Task Queue partitions for read operations.
  - constraints: { namespace: 'namespace1', taskQueueName: 'tq' } # Applies to the "tq" Task Queue for both Workflows and Activities.
    value: 8 # The default value for this key is 4. Task Queues that need to support high traffic require a higher number of partitions. Set these values in accordance with your poller count.
  - constraints:
      {
        namespace: 'namespace1',
        taskQueueName: 'other-tq',
        taskType: 'Activity',
      } # Applies to the "other-tq" Task Queue for Activities specifically.
    value: 20
  - constraints: { namespace: 'namespace2' } # Applies to all Task Queues in "namespace2".
    value: 10
  - constraints: {} # Applies to all other Task Queues in "namespace1" and all other Namespaces.
    value: 16
matching.numTaskqueueWritePartitions: # Number of Task Queue partitions for write operations.
  - constraints: { namespace: 'namespace1', taskQueueName: 'tq' } # Applies to the "tq" Task Queue for both Workflows and Activities.
    value: 8 # The default value for this key is 4. Task Queues that need to support high traffic require a higher number of partitions. Set these values in accordance with your poller count.
  - constraints:
      {
        namespace: 'namespace1',
        taskQueueName: 'other-tq',
        taskType: 'Activity',
      } # Applies to the "other-tq" Task Queue for Activities specifically.
    value: 20
  - constraints: { namespace: 'namespace2' } # Applies to all Task Queues in "namespace2".
    value: 10
  - constraints: {} # Applies to all other Task Queues in "namespace1" and all other Namespaces.
    value: 16
```

{/* Note that the values set with most constraints take priority over values that are set with fewer constraints, regardless of the order in which they are set in the dynamic configuration key. */}

For more examples on how dynamic configuration is set, see:

- [docker-compose](https://github.com/temporalio/docker-compose/tree/main/dynamicconfig)
- [samples-server](https://github.com/temporalio/samples-server/blob/main/tls/config/dynamicconfig/development.yaml)

## Commonly used dynamic configuration keys

The following table lists commonly used dynamic configuration keys that can be used for rate limiting requests to the Temporal Cluster. Setting dynamic configuration keys is optional. If you choose to update these values for your Temporal Cluster, ensure that you are provisioning enough resources to handle the load.

All values listed here are for Temporal server v1.21. Check [server release notes](https://github.com/temporalio/temporal/releases) to verify any potential breaking changes when upgrading your versions.

### Service-level RPS limits

The Requests Per Second (RPS) dynamic configuration keys set the rate at which requests can be made to each service in your Cluster. When scaling your services, tune the RPS to test your workload and set acceptable provisioning benchmarks. Exceeding these limits results in `ResourceExhaustedError`.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| Frontend | | | |
| `frontend.rps` | Int | Rate limit (requests/second) for requests accepted by each Frontend Service host. | 2400 |
| `frontend.namespaceRPS` | Int | Rate limit (requests/second) for requests accepted by each Namespace on the Frontend Service. | 2400 |
| `frontend.namespaceCount` | Int | Limit on the number of concurrent Task Queue polls per Namespace per Frontend Service host. | 1200 |
| `frontend.globalNamespaceRPS` | Int | Rate limit (requests/second) for requests accepted per Namespace, applied across the Cluster. The limit is evenly distributed among available Frontend Service instances. If this is set, it overrides the per-instance limit (`frontend.namespaceRPS`). | 0 |
| `internal-frontend.globalNamespaceRPS` | Int | Rate limit (requests/second) for requests accepted on each Internal-Frontend Service host, applied across the Cluster. | 0 |
| History | | | |
| `history.rps` | Int | Rate limit (requests/second) for requests accepted by each History Service host. | 3000 |
| Matching | | | |
| `matching.rps` | Int | Rate limit (requests/second) for requests accepted by each Matching Service host. | 1200 |
| `matching.numTaskqueueReadPartitions` | Int | Number of read partitions for a Task Queue. Must be set with `matching.numTaskqueueWritePartitions`. | 4 |
| `matching.numTaskqueueWritePartitions` | Int | Number of write partitions for a Task Queue. | 4 |
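As a sketch only (the values and the Namespace name are illustrative, not recommendations), a per-Namespace override of the Frontend rate limits using the format above might look like:

```yaml
frontend.rps:
  - value: 4800 # illustrative per-host limit
frontend.namespaceRPS:
  - constraints: {} # Default for all Namespaces.
    value: 2400
  - constraints: { namespace: 'high-throughput-namespace' } # hypothetical Namespace name
    value: 4800
```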
### QPS limits for Persistence store

The Queries Per Second (QPS) dynamic configuration keys set the maximum number of queries a service can make per second to the Persistence store. Persistence store rate limits are evaluated synchronously. Adjust these keys according to your database capacity and workload. If the number of queries made to the Persistence store exceeds the dynamic configuration value, you will see latencies and timeouts on your tasks.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| Frontend | | | |
| `frontend.persistenceMaxQPS` | Int | Maximum number of queries per second that the Frontend Service host can send to the Persistence store. | 2000 |
| `frontend.persistenceNamespaceMaxQPS` | Int | Maximum number of queries per second that each Namespace on the Frontend Service host can send to the Persistence store. If the value set for this config is less than or equal to 0, the value set for `frontend.persistenceMaxQPS` will apply. | 0 |
| History | | | |
| `history.persistenceMaxQPS` | Int | Maximum number of queries per second that the History host can send to the Persistence store. | 9000 |
| `history.persistenceNamespaceMaxQPS` | Int | Maximum number of queries per second for each Namespace that the History host can send to the Persistence store. If the value set for this config is less than or equal to 0, then the value set for `history.persistenceMaxQPS` will apply. | 0 |
| Matching | | | |
| `matching.persistenceMaxQPS` | Int | Maximum number of queries per second that the Matching Service host can send to the Persistence store. | 9000 |
| `matching.persistenceNamespaceMaxQPS` | Int | Maximum number of queries per second that the Matching host can send to the Persistence store for each Namespace. If the value set for this config is less than or equal to 0, the value set for `matching.persistenceMaxQPS` will apply. | 0 |
| Worker | | | |
| `worker.persistenceMaxQPS` | Int | Maximum number of queries per second that the Worker Service host can send to the Persistence store. | 100 |
| `worker.persistenceNamespaceMaxQPS` | Int | Maximum number of queries per second that the Worker host can send to the Persistence store for each Namespace. If the value set for this config is less than or equal to 0, the value set for `worker.persistenceMaxQPS` will apply. | 0 |
| Visibility | | | |
| `system.visibilityPersistenceMaxReadQPS` | Int | Maximum number of queries per second that the Visibility database can receive for read operations. | 9000 |
| `system.visibilityPersistenceMaxWriteQPS` | Int | Maximum number of queries per second that the Visibility database can receive for write operations. | 9000 |
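A hedged sketch (illustrative values and Namespace name) that caps Persistence queries from one busy Namespace on the History Service while keeping the host-wide default:

```yaml
history.persistenceMaxQPS:
  - value: 9000 # host-wide default, unchanged
history.persistenceNamespaceMaxQPS:
  - constraints: { namespace: 'batch-processing' } # hypothetical Namespace name
    value: 3000
```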
### Activity and Workflow default policy setting

You can define default values for Activity and Workflow [Retry Policies](/encyclopedia/retry-policies) at the Cluster level with the following dynamic configuration keys.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| `history.defaultActivityRetryPolicy` | Map (key-value pair elements) | Server configuration for an Activity Retry Policy when it is not explicitly set for the Activity in your code. | [Default values for Retry Policy](/encyclopedia/retry-policies#default-values-for-retry-policy) |
| `history.defaultWorkflowRetryPolicy` | Map (key-value pair elements) | Retry Policy for unset fields where the user has set an explicit `RetryPolicy` but has not specified all the fields. | [Default values for Retry Policy](/encyclopedia/retry-policies#default-values-for-retry-policy) |

### Size limit settings

The Persistence store in the Cluster has default size limits set for optimal performance. The dynamic configuration keys relating to some of these are listed below. The default values on these keys are based on extensive testing. You can change these values, but ensure that you are provisioning enough database resources to handle the changed values. For details on platform limits, see the [Temporal Platform limits sheet](/self-hosted-guide/defaults).

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| `limit.maxIDLength` | Int | Length limit for various Ids, including: `Namespace`, `TaskQueue`, `WorkflowID`, `ActivityID`, `TimerID`, `WorkflowType`, `ActivityType`, `SignalName`, `MarkerName`, `ErrorReason`/`FailureReason`/`CancelCause`, `Identity`, and `RequestID`. | 1000 |
| `limit.blobSize.warn` | Int | Limit, in bytes, for BLOB size in an Event, at which a warning is thrown in the server logs. | 512 KB (512 × 1024) |
| `limit.blobSize.error` | Int | Limit, in bytes, for BLOB size in an Event, at which an error occurs in the transaction. | 2 MB (2 × 1024 × 1024) |
| `limit.historySize.warn` | Int | Limit, in bytes, at which a warning is thrown for the Workflow Execution Event History size. | 10 MB (10 × 1024 × 1024) |
| `limit.historySize.error` | Int | Limit, in bytes, at which an error occurs in the Workflow Execution for exceeding the allowed size. | 50 MB (50 × 1024 × 1024) |
| `limit.historyCount.warn` | Int | Limit, in count, at which a warning is thrown for the number of Events in the Workflow Execution Event History. | 10,240 Events |
| `limit.historyCount.error` | Int | Limit, in count, at which an error occurs in the Workflow Execution for exceeding the allowed number of Events. | 51,200 Events |
| `limit.numPendingActivities.error` | Int | Maximum number of pending Activities that a Workflow Execution can have before the `ScheduleActivityTask` fails with an error. | 2000 |
| `limit.numPendingSignals.error` | Int | Maximum number of pending Signals that a Workflow Execution can have before the `SignalExternalWorkflowExecution` commands from this Workflow fail with an error. | 2000 |
| `history.maximumSignalsPerExecution` | Int | Maximum number of Signals that a Workflow Execution can receive before it throws an `Invalid Argument` error. | 10000 |
| `limit.numPendingCancelRequests.error` | Int | Maximum number of pending requests to cancel other Workflows that a Workflow Execution can have before the `RequestCancelExternalWorkflowExecution` commands fail with an error. | 2000 |
| `limit.numPendingChildExecutions.error` | Int | Maximum number of pending Child Workflows that a Workflow Execution can have before the `StartChildWorkflowExecution` commands fail with an error. | 2000 |
| `frontend.visibilityMaxPageSize` | Int | Maximum number of Workflow Executions shown in one page by the ListWorkflowExecutions API. | 1000 |

### Secondary visibility settings

Secondary visibility configuration keys enable Dual Visibility on your Temporal Cluster. This can be useful when migrating a Visibility database or creating a backup Visibility store.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| `system.enableReadFromSecondaryVisibility` | Boolean | Enables reading from the [secondary visibility store](/dual-visibility), and can be set per Namespace. Allowed values are `true` or `false`. | `false` |
| `system.secondaryVisibilityWritingMode` | String | Enables writing Visibility data to the secondary Visibility store, and can be set per Namespace. Setting this value to `on` disables write operations to the primary Visibility store. Allowed values: `off` enables writing to the primary Visibility store only; `on` enables writing to the secondary Visibility store only; `dual` enables writing to both the primary and secondary Visibility stores. | `off` |

### Server version check settings

The Temporal server reports the server version and the version of the SDK that it is connected to in order to determine whether the Web UI should show a banner stating that a new version is available to install. This can be disabled by setting the following dynamic configuration key or by setting the `TEMPORAL_VERSION_CHECK_DISABLED` environment variable to `1`.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| `frontend.enableServerVersionCheck` | Boolean | Enables the Temporal server to report version information about the current server and SDK. Allowed values are `true` or `false`. | `true` |
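A hedged sketch of a Dual Visibility rollout using these keys (the Namespace name is hypothetical): write to both stores, but enable secondary reads for a single canary Namespace first:

```yaml
system.secondaryVisibilityWritingMode:
  - value: 'dual' # write to both primary and secondary stores
system.enableReadFromSecondaryVisibility:
  - constraints: {} # Default: keep reading from the primary store.
    value: false
  - constraints: { namespace: 'canary-namespace' } # hypothetical Namespace name
    value: true
```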
---

## Errors

This reference lists possible [Workflow Task](/tasks#workflow-task) errors and how to resolve them.

> For other types of errors, see [Temporal Failures](https://docs.temporal.io/kb/failures).

Each of the below errors corresponds to a [WorkflowTaskFailedCause](https://api-docs.temporal.io/#temporal.api.enums.v1.WorkflowTaskFailedCause), which appears in [Events](/workflow-execution/event#event) under `workflow_task_failed_event_attributes`.

## Bad Cancel Timer Attributes {#bad-cancel-timer-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed while attempting to cancel a Timer.

{/* TODO add Timer term definition and link to it */}

Check your Timer attributes for a missing Timer Id value. Add a valid Timer Id and redeploy the code.

## Bad Cancel Workflow Execution Attributes {#bad-cancel-workflow-execution-attributes}

The [Workflow Task](/tasks#workflow-task) failed due to unset [CancelWorkflowExecution](/references/commands#cancelworkflowexecution) attributes. Reset any missing attributes and redeploy the Workflow Task.

## Bad Complete Workflow Execution Attributes {#bad-complete-workflow-execution-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed due to unset attributes on [CompleteWorkflowExecution](/references/commands#completeworkflowexecution).

Reset any missing attributes. Adjust the size of your Payload if it exceeds size limits.

## Bad Continue as New Attributes {#bad-continue-as-new-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed to validate a [ContinueAsNew](/references/commands#continueasnewworkflowexecution) attribute. The attribute could be unset or invalid.

Reset any missing attributes. If the payload or memo exceeded size limits, adjust the input size. Check that the [Workflow](/workflows) validates search attributes after unaliasing keys.

## Bad Fail Workflow Execution Attributes {#bad-fail-workflow-execution-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed due to unset [FailWorkflowExecution](/references/commands#failworkflowexecution) attributes.

If you encounter this error, make sure that `StartToCloseTimeout` or `ScheduleToCloseTimeout` is set. Restart the [Worker](/workers) that the [Workflow](/workflows) and [Activity](/activities) are registered to.

## Bad Modify Workflow Properties Attributes {#bad-modify-workflow-properties-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed to validate attributes on a property in the Upsert Memo or in a payload. These attributes are either unset or exceed size limits.

Reset any unset or empty attributes. Adjust the size of the [Memo](/workflow-execution#memo) or payload to fit within the system's limits.

## Bad Record Marker Attributes {#bad-record-marker-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed due to an unset or incorrect [Marker](/references/events#markerrecorded) name.

Enter a valid Marker name and redeploy the Task.

## Bad Request Cancel Activity Attributes {#bad-request-cancel-activity-attributes}

This error indicates either unset attributes for [RequestCancelActivity](/references/commands#requestcancelactivitytask) or an invalid History Builder state.

Update the [Temporal SDK](/encyclopedia/temporal-sdks) to the most recent release. Reset any unset attributes before retrying the [Workflow Task](/tasks#workflow-task). If you continue to see this error, review your code for [nondeterministic causes](/workflow-definition#non-deterministic-change).

## Bad Request Cancel External Workflow Execution Attributes {#bad-request-cancel-external-workflow-execution}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed while trying to cancel an external Workflow. Unset or invalid attributes can cause this to occur.

Reset any missing attributes, such as the Workflow Id or Run Id. Adjust any fields that exceed length limits. If the [Child Workflow](/child-workflows) is set to both `Start` and `RequestCancel`, remove one of these attributes. A Child Workflow cannot perform both actions in the same Workflow Task.
## Bad Schedule Activity Attributes {#bad-schedule-activity-attributes}

This error indicates unset or invalid attributes for [`ScheduleActivityTask`](/references/commands#scheduleactivitytask) or [`CompleteWorkflowExecution`](/references/commands#completeworkflowexecution).

Reset any unset or empty attributes. Adjust the size of the received payload to stay within the given size limit.

## Bad Schedule Nexus Operation Attributes

This error indicates unset or invalid attributes for ScheduleNexusOperation, for example if the Nexus Endpoint name used in the caller Workflow doesn't exist. Inspect the reason given in the error for mitigation when possible.

## Bad Search Attributes {#bad-search-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) has unset or invalid [Search Attributes](/search-attribute). This can cause Workflow Tasks to continue to retry without success.

Make sure that all attributes are defined before retrying the Task. Adjust the size of the Payload to fit within the system's size limits.

## Bad Signal Input Size {#bad-signal-input-size}

This error indicates that the Payload has exceeded the [Signal's](/sending-messages#sending-signals) available input size.

Adjust the size of the Payload, and redeploy the [Workflow Task](/tasks#workflow-task).

## Bad Signal Workflow Execution Attributes {#bad-signal-workflow-execution-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed to validate attributes for [SignalExternalWorkflowExecution](/references/commands#signalexternalworkflowexecution).

Reset any unset, missing, nil, or invalid attributes. Adjust the input to fit within the system's size limits.

## Bad Start Child Execution Attributes {#bad-start-child-execution-attributes}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed to validate attributes for [`StartChildWorkflowExecution`](/references/commands#startchildworkflowexecution).

Adjust the input size of the attributes to fall within the system's size limits. Make sure that [Search Attribute](/search-attribute) validation is performed after unaliasing keys.

## Bad Start Timer Attributes {#bad-start-timer-attributes}

This error indicates that the scheduled [Event](/workflow-execution/event#event) is missing a Timer Id.

{/* TODO add Timer Id as anchor for term and link to it */}

Set a valid Timer Id and retry the [Workflow Task](/tasks#workflow-task).

## Cause Bad Binary {#cause-bad-binary}

This error indicates that the [Worker](/workers) deployment returned a bad binary checksum.

{/* TODO: get more information about binary */}

## Cause Bad Update {#cause-bad-update}

{/* TODO: add link to Workflow Update page when written */}

This error indicates that a [Workflow Execution](/workflow-execution) tried to complete before receiving an Update. `BadUpdate` can happen when a [Worker](/workers#worker) generates a [Workflow Task Completed](/references/events#workflowtaskcompleted) message with missing fields or an invalid Update response format.

This error might indicate usage of an unsupported SDK. Make sure you're using a [supported SDK](/encyclopedia/temporal-sdks).

## Cause Reset Workflow {#cause-reset-workflow}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed due to a request to reset the [Workflow](/workflows).

If the system hasn't started a new Workflow, manually reset the Workflow.
## Cause Unhandled Update {#cause-unhandled-update}

`UnhandledUpdate` occurs when the Temporal Server receives a Workflow Update while a Workflow Task that is being processed on a Worker produces a Command that would transition the Workflow to a closed state.

Temporal rejects the Workflow Task completion to guarantee that the Update is eventually handled by Workflow code, and rewinds the Workflow so it can handle the pending Update.

This error can happen when the Workflow receives frequent Updates.

## Cause Unspecified {#cause-unspecified}

This error indicates that the [Workflow Task](/tasks#workflow-task) has failed for an unknown reason.

If you see this error, examine your Workflow Definition.

## Failover Close Command {#failover-close-command}

This error indicates that a [Namespace](/namespaces) failover forced the [Workflow Task](/tasks#workflow-task) to close. The system automatically schedules a retry when this error occurs.

{/* TODO: troubleshooting */}

## Force Close Command {#force-close-command}

This error indicates that the [Workflow Task](/tasks#workflow-task) was forced to close. A retry will be scheduled if the error is recoverable.

{/* TODO: more info */}

## Nondeterminism Error {#non-deterministic-error}

The [Workflow Task](/tasks#workflow-task) failed due to a [nondeterminism error](/workflow-definition#non-deterministic-change).

{/* TODO: info */}

## Pending Activities Limit Exceeded {#pending-activities-limit-exceeded}

The [Workflow](/workflows) has reached capacity for pending [Activities](/activities). Therefore, the [Workflow Task](/tasks#workflow-task) was failed to prevent the creation of another Activity.

Let the Workflow complete any current Activities before redeploying the code.

## Pending Child Workflows Limit Exceeded {#pending-child-workflows-limit-exceeded}

This error indicates that the [Workflow](/workflows) has reached capacity for pending [Child Workflows](/child-workflows). Therefore, the [Workflow Task](/tasks#workflow-task) was failed to prevent additional Child Workflows from being added.

Wait for the system to finish any currently running Child Workflows before redeploying this Task.

## Pending Nexus Operations Limit Exceeded {#pending-nexus-operations-limit-exceeded}

The Workflow has reached capacity for pending Nexus Operations. Therefore, the Workflow Task was failed to prevent the creation of another Nexus Operation.

Let the Workflow complete any current Nexus Operations before retrying the Task.

See [Per Workflow Nexus Operation Limits](/cloud/limits#per-workflow-nexus-operation-limits) for details.

## Pending Request Cancel Limit Exceeded {#pending-request-cancel-limit-exceeded}

This error indicates that the [Workflow Task](/tasks#workflow-task) failed after attempting to add more cancel requests. The [Workflow](/workflows) has reached capacity for pending requests to cancel other Workflows, and cannot accept more requests.

If you see this error, give the system time to process pending requests before retrying the Task.

## Pending Signals Limit Exceeded {#pending-signals-limit-exceeded}

The Workflow has reached capacity for pending Signals. Therefore, the [Workflow Task](/tasks#workflow-task) was failed after attempting to add more [Signals](/sending-messages#sending-signals) to an external Workflow.

Wait for Signals to be processed by the Workflow before retrying the Task.
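On a self-hosted Cluster, these pending-item caps correspond to the `limit.*` dynamic configuration keys listed in the size limit settings above. If you have provisioned your Persistence store accordingly, they can be raised; a hedged sketch with illustrative values only:

```yaml
limit.numPendingActivities.error:
  - value: 4000 # illustrative; default is 2000
limit.numPendingSignals.error:
  - value: 4000 # illustrative; default is 2000
limit.numPendingCancelRequests.error:
  - value: 4000 # illustrative; default is 2000
```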
## Reset Sticky Task Queue {#reset-sticky-task-queue}

This error indicates that the Sticky [Task Queue](/task-queue) needs to be reset.

If you see this error, reset the Sticky Task Queue. The system will retry automatically.

## Resource Exhausted Cause Concurrent Limit {#resource-exhausted-cause-concurrent-limit}

This error indicates that the concurrent [poller count](/develop/worker-performance#poller-count) has been exhausted.

{/* TODO: more info needed */}

Adjust the poller count per [Worker](/workers).

## Resource Exhausted Cause Persistence Limit {#resource-exhausted-cause-persistence-limit}

This error indicates that the persistence rate limit has been reached.

{/* TODO: more info needed */}

## Resource Exhausted Cause RPS Limit {#resource-exhausted-cause-rps-limit}

This error indicates that the [Workflow](/workflows) has exhausted its RPS limit.

{/* TODO: more info needed */}

## Resource Exhausted Cause System Overload {#resource-exhausted-cause-system-overload}

This error indicates that the system is overloaded and cannot allocate further resources to [Workflow Tasks](/tasks#workflow-task).

{/* TODO: more info needed */}

## Resource Exhausted Cause Unspecified {#resource-exhausted-cause-unspecified}

This error indicates that an unknown cause is preventing resources from being allocated to further [Workflow Tasks](/tasks#workflow-task).

{/* TODO: more info needed */}

## Schedule Activity Duplicate Id {#schedule-activity-duplicate-id}

The [Workflow Task](/tasks#workflow-task) failed because the [Activity](/activities) Id is already in use.

Check your code to see if you've already specified the same Activity Id in your [Workflow](/workflows). Enter another Activity Id, and try running the Workflow Task again.

## Start Timer Duplicate Id {#start-timer-duplicate-id}

This error indicates that a Timer with the given Timer Id has already started.

{/* TODO link to Timer term when exists */}

Try entering a different Timer Id, and retry the [Workflow Task](/tasks#workflow-task).

## Unhandled Command {#unhandled-command}

This error indicates that new [Events](/references/events) became available since the last [Workflow Task](/tasks#workflow-task) started. The Workflow Task was failed because the [Workflow](/workflows) attempted to close itself without handling the new Events.

`UnhandledCommand` can happen when the Workflow is receiving a high number of [Signals](/sending-messages#sending-signals). If the Workflow doesn't have enough time to handle these Signals, a retry Workflow Task is scheduled to handle the new Events.

To prevent this error, drain the Signal Channel with the ReceiveAsync function.

If you continue to see this error, check your logs for failing Workflow Tasks. The Workflow may have been picked up by a different [Worker](/workers#worker).

## Workflow Worker Unhandled Failure {#workflow-worker-unhandled-failure}

This error indicates that the [Workflow Task](/tasks#workflow-task) encountered an unhandled failure from the [Workflow Definition](/workflow-definition).

{/* TODO: more info needed */}

---

## Temporal Events reference

[Events](/workflow-execution/event#event) are created by the [Temporal Service](/temporal-service) in response to external occurrences and [Commands](/workflow-execution#command) generated by a [Workflow Execution](/workflow-execution). All possible Events that could appear in a Workflow Execution [Event History](/workflow-execution/event#event-history) are listed below.

### WorkflowExecutionStarted

This is always the first [Event](/workflow-execution/event#event) in a Workflow Execution Event History. It indicates that the Temporal Service received a request to spawn the Workflow Execution.
| Field | Description | | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | | workflow_type | The [Name](/workflow-definition#workflow-type) of [Workflow](/workflows) that was initiated. | | parent_workflow_namespace | The [Namespace](/namespaces) of the Parent [Workflow Execution](/workflow-execution), if applicable. | | parent_workflow_execution | Identifies the parent Workflow and the execution run. | | parent_initiated_event_id | Id of the [StartWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event this Event corresponds to. | | task_queue | The [Task Queue](/task-queue) that this [Workflow Task](/tasks#workflow-task) was enqueued in. | | input | Information that is deserialized by the SDK to provide arguments to the Workflow. | | workflow_execution_timeout | The total timeout period for a [Workflow Execution](/workflow-execution), including retries and continue-as-new. | | workflow_run_timeout | Timeout of a single Workflow run. | | workflow_task_timeout | Timeout of a single Workflow Task. | | continued_execution_run_id | [Run Id](/workflow-execution/workflowid-runid#run-id) of the previous Workflow which continued-as-new, retried or was executed by Cron into this Workflow. | | initiator | Allows the Workflow to continue as a new Workflow Execution. | | continued_failure | Serialized result of a failure. | | last_completion_result | Information from the previously completed [Task](/tasks#task), if applicable. | | original_execution_run_id | The [Run Id](/workflow-execution/workflowid-runid#run-id) of the original Workflow started. | | identity | The Id of the [Client](/self-hosted-guide/security#client-connections) or parent Workflow [Worker](/workers#worker) that requested the start of this Workflow. | | first_execution_run_id | The first [Run Id](/workflow-execution/workflowid-runid#run-id), along the chain of [Continue-As-New](/workflow-execution/continue-as-new) Runs and Reset. | | retry_policy | The amount of retries as determined by the service's dynamic configuration. Retries will happen until 'schedule_to_close_timeout' is reached. | | attempt | The number of attempts that have been made to complete this Task. | | workflow_execution_expiration_time | The absolute time at which the Workflow Execution will [time out](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout). | | cron_schedule | Displays the Workflow's [Cron Schedule](/cron-job), if applicable. | | first_workflow_task_backoff | Contains the amount of time between when this iteration of the Workflow was scheduled, and when it should run next. Applies to Cron Scheduling. | | memo | Non-indexed information to show in the Workflow. | | search_attributes | Provides data for setting up a Workflow's [Search Attributes](/search-attribute). | | prev_auto_reset_points | | | header | Information passed by the sender of the [Signal](/sending-messages#sending-signals) that is copied into the [Workflow Task](/tasks#workflow-task). | | completion_callbacks | Completion callbacks attached when this workflow was started. | ### WorkflowExecutionCompleted This indicates that the [Workflow Execution](/workflow-execution) has successfully completed. The [Event](/workflow-execution/event#event) contains Workflow Execution results. 
| Field | Description |
| --- | --- |
| result | Serialized result of the completed [Workflow](/workflows). |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| new_execution_run_id | The [Run Id](/workflow-execution/workflowid-runid#run-id) of the new Workflow Execution started as a result of a [Cron Schedule](/cron-job). |

### WorkflowExecutionFailed

This [Event](/workflow-execution/event#event) indicates that the [Workflow Execution](/workflow-execution) has completed unsuccessfully and contains the Workflow Execution error.

| Field | Description |
| --- | --- |
| failure | Serialized result of a [Workflow](/workflows) failure. |
| retry_state | The reason provided for whether the [Task](/tasks#task) should or shouldn't be retried. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| new_execution_run_id | The [Run Id](/workflow-execution/workflowid-runid#run-id) of the new Workflow started by Cron or [Retry](/encyclopedia/retry-policies). |

### WorkflowExecutionTimedOut

This [Event](/workflow-execution/event#event) type indicates that the [Workflow Execution](/workflow-execution) was timed out by the [Temporal Server](/temporal-service/temporal-server) because the [Workflow](/workflows) did not complete within its [timeout](/encyclopedia/detecting-workflow-failures#workflow-execution-timeout) settings.

| Field | Description |
| --- | --- |
| retry_state | The reason provided for whether the [Task](/tasks#task) should or shouldn't be retried. |
| new_execution_run_id | The [Run Id](/workflow-execution/workflowid-runid#run-id) of the new Workflow started by Cron or [Retry](/encyclopedia/retry-policies). |

### WorkflowExecutionCancelRequested

This [Event](/workflow-execution/event#event) type indicates that a request has been made to cancel the [Workflow Execution](/workflow-execution).

| Field | Description |
| --- | --- |
| cause | The user-provided reason for the cancelation request. |
| external_initiated_event_id | The Id of the Event in the [Workflow](/workflows) that requested cancelation, if applicable. |
| external_workflow_execution | Identifies the external Workflow and its execution run. |
| identity | Id of the [Worker](/workers#worker) that requested cancelation. |

### WorkflowExecutionCanceled

This [Event](/workflow-execution/event#event) type indicates that the client has confirmed the cancelation request and the [Workflow Execution](/workflow-execution) has been canceled.
| Field | Description |
| --- | --- |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| details | Additional information reported by the [Workflow](/workflows) upon cancelation. |

### WorkflowExecutionSignaled

This [Event](/workflow-execution/event#event) type indicates that the [Workflow](/workflows) has received a [Signal](/sending-messages#sending-signals). The Event type contains the Signal name and a Signal payload.

| Field | Description |
| --- | --- |
| signal_name | The name/type of the Signal to be fired. |
| input | Information that is deserialized by the SDK to provide arguments to the Workflow function. |
| identity | Identifies the [Worker](/workers#worker) that signaled the Workflow. |
| header | Information passed by the sender of the Signal that is copied into the [Workflow Task](/tasks#workflow-task). |

### WorkflowExecutionTerminated

This [Event](/workflow-execution/event#event) type indicates that the [Workflow Execution](/workflow-execution) has been forcefully terminated, most likely because the terminate Workflow API was called.

| Field | Description |
| --- | --- |
| reason | Information provided by the user or client for Workflow termination. |
| details | Additional information reported by the Workflow upon termination. |
| identity | Identifies the Worker that requested termination. |

### WorkflowExecutionContinuedAsNew

This [Event](/workflow-execution/event#event) type indicates that the Workflow has successfully completed and that a new Workflow has been started within the same transaction. This Event type contains the last [Workflow Execution](/workflow-execution) results as well as the new Workflow Execution inputs.

| Field | Description |
| --- | --- |
| new_execution_run_id | The [Run Id](/workflow-execution/workflowid-runid#run-id) of the new Workflow started by this Continue-As-New Event. |
| workflow_type | The name/type of Workflow that was started by this Event. |
| task_queue | The [Task Queue](/task-queue) that this [Workflow Task](/tasks#workflow-task) was enqueued in. |
| input | Information that is deserialized by the SDK to provide arguments to the Workflow. |
| workflow_run_timeout | Timeout of a single Workflow run. |
| workflow_task_timeout | Timeout of a single Workflow Task. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event's command was reported with. |
| backoff_start_interval | The amount of time to delay the beginning of the [ContinuedAsNew](#workflowexecutioncontinuedasnew) Workflow. |
| initiator | Allows the Workflow to continue as a new execution. |
| last_completion_result | Information passed by the previously completed Task to the ongoing execution. |
| header | Information passed by the sender of the Signal that is copied into the Workflow Task. |
| memo | Non-indexed information to show in the Workflow. |
| search_attributes | Provides data for setting up a Workflow's [Search Attributes](/search-attribute).
| ### WorkflowExecutionOptionsUpdated This [Event](/workflow-execution/event#event) type indicates that the Workflow options have been updated. The Event type contains updated options such as a versioning override or attached completion callbacks. | Field | Description | | ----------------------------- | ---------------------------------------------------------------------------------------------------------------------- | | versioning_override | Versioning override upserted in this event. Ignored if nil or if unset_versioning_override is true. | | unset_versioning_override | Versioning override removed in this event. | | attached_request_id | Request ID attached to the running workflow execution so subsequent requests with the same request ID will be deduped. | | attached_completion_callbacks | Completion callbacks attached to the running workflow execution. | ### WorkflowTaskScheduled This [Event](/workflow-execution/event#event) type indicates that the [Workflow Task](/tasks#workflow-task) has been scheduled. The SDK client should now be able to process any new history events. | Field | Description | | ---------------------- | ------------------------------------------------------------------------------------------ | | task_queue | The [Task Queue](/task-queue) that this Workflow Task was enqueued in. | | start_to_close_timeout | The time that the [Worker](/workers#worker) takes to process this Task once it's received. | | attempt | The number of attempts that have been made to complete this Task. | ### WorkflowTaskStarted This [Event](/workflow-execution/event#event) type indicates that the [Workflow Task](/tasks#workflow-task) has started. The SDK client has picked up the Workflow Task and is processing new history events. | Field | Description | | ------------------ | ----------------------------------------------------------------------------------------------------------- | | scheduled_event_id | The Id of the [WorkflowTaskScheduled](#workflowtaskscheduled) Event that this Workflow Task corresponds to. | | identity | Identifies the [Worker](/workers#worker) that started this Task. | | request_id | Identifies the Workflow Task request. | ### WorkflowTaskCompleted This [Event](/workflow-execution/event#event) type indicates that the [Workflow Task](/tasks#workflow-task) completed. | Field | Description | | ------------------ | ----------------------------------------------------------------------------------------------------------- | | scheduled_event_id | The Id of the [WorkflowTaskScheduled](#workflowtaskscheduled) Event that this Workflow Task corresponds to. | | started_event_id | The Id of the [WorkflowTaskStarted](#workflowtaskstarted) Event that this Task corresponds to. | | identity | Identity of the [Worker](/workers#worker) that completed this Task. | | binary_checksum | Binary Id of the Worker that completed this Task. | The SDK client picked up the Workflow Task, processed new history events, and may or may not ask the [Temporal Server](/temporal-service/temporal-server) to do additional work. 
It is possible for the following events to still occur: - [ActivityTaskScheduled](#activitytaskscheduled) - [TimerStarted](#timerstarted) - [UpsertWorkflowSearchAttributes](#upsertworkflowsearchattributes) - [MarkerRecorded](#markerrecorded) - [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) - [RequestCancelExternalWorkflowExecutionInitiated](#requestcancelexternalworkflowexecutioninitiated) - [SignalExternalWorkflowExecutionInitiated](#signalexternalworkflowexecutioninitiated) - [WorkflowExecutionCompleted](#workflowexecutioncompleted) - [WorkflowExecutionFailed](#workflowexecutionfailed) - [WorkflowExecutionCanceled](#workflowexecutioncanceled) - [WorkflowExecutionContinuedAsNew](#workflowexecutioncontinuedasnew) ### WorkflowTaskTimedOut This [Event](/workflow-execution/event#event) type indicates that the [Workflow Task](/tasks#workflow-task) encountered a [timeout](/encyclopedia/detecting-workflow-failures#workflow-task-timeout). Either an SDK client with a local cache was not available at the time, or it took too long for the SDK client to process the Task. | Field | Description | | ------------------ | ----------------------------------------------------------------------------------------------------------- | | scheduled_event_id | The Id of the [WorkflowTaskScheduled](#workflowtaskscheduled) Event that this Workflow Task corresponds to. | | started_event_id | The Id of the [WorkflowTaskStarted](#workflowtaskstarted) Event that this Task corresponds to. | | timeout_type | The type of timeout that has occurred. | ### WorkflowTaskFailed This [Event](/workflow-execution/event#event) type indicates that the [Workflow Task](/tasks#workflow-task) encountered a failure. Usually this means that the Workflow was non-deterministic. However, the Workflow reset functionality also uses this Event. | Field | Description | | ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------- | | scheduled_event_id | The Id of the [WorkflowTaskScheduled](#workflowtaskscheduled) Event that this Workflow Task corresponds to. | | started_event_id | The Id of the [WorkflowTaskStarted](#workflowtaskstarted) Event that this Workflow Task corresponds to. | | failure | Details for the Workflow Task's failure. | | identity | The identity of the [Worker](/workers#worker) that failed this Task. The Worker must be explicitly defined to return a value for this field. | | base_run_id | The original [Run Id](/workflow-execution/workflowid-runid#run-id) of the Workflow. | | new_run_id | The Run Id of the reset Workflow. | | fork_event_version | Identifies the Event version that was forked off to the reset Workflow. | | binary_checksum | The Binary Id of the Worker that failed this Task. The Worker must be explicitly defined to return a value for this field. | ### ActivityTaskScheduled This [Event](/workflow-execution/event#event) type indicates that an [Activity Task](/tasks#activity-task) was scheduled. The SDK client should pick up this Activity Task and execute. This Event type contains Activity inputs, as well as Activity Timeout configurations. | Field | Description | | -------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | activity_id | The identifier assigned to this Activity by a [Worker](/workers#worker) or user. 
| activity_type | The [type of Activity](/activity-definition#activity-type) that was scheduled. |
| namespace | Namespace of the Workflow that the [Activity](/activities) resides in. |
| task_queue | The [Task Queue](/task-queue) that this Activity Task was enqueued in. |
| header | Information passed by the Workflow that is copied into the Activity Task. |
| input | Information that is deserialized by the SDK to provide arguments to the Activity function. |
| schedule_to_close_timeout | The amount of time that the caller is willing to wait for Activity completion. Limits the amount of time that retries will be attempted for this Activity. |
| schedule_to_start_timeout | Limits the time that an Activity Task can stay in a Task Queue. This timeout is not retryable. |
| start_to_close_timeout | Maximum amount of execution time that an Activity is allowed after being picked up by a Worker. This timeout is retryable. |
| heartbeat_timeout | Maximum amount of time allowed between successful Worker heartbeats. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| retry_policy | The Retry Policy for this Activity. Retries will happen until `schedule_to_close_timeout` is reached. |

### ActivityTaskStarted

This [Event](/workflow-execution/event#event) type indicates that an [Activity Task Execution](/tasks#activity-task-execution) was started. The SDK Worker picked up the Activity Task and started processing the [Activity](/activities) invocation. Note, however, that this Event is not written to History until a terminal Event (such as [ActivityTaskCompleted](#activitytaskcompleted) or [ActivityTaskFailed](#activitytaskfailed)) occurs.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [ActivityTaskScheduled](#activitytaskscheduled) Event that this Task corresponds to. |
| identity | Identifies the [Worker](/workers#worker) that started the Task. |
| request_id | Identifies the Activity Task request. |
| attempt | The number of attempts that have been made to complete this Task. |
| last_failure | Details from the most recent failure Event. Only assigned a value if the Task has previously failed and been retried. |

### ActivityTaskCompleted

This [Event](/workflow-execution/event#event) type indicates that the [Activity Task](/tasks#activity-task) has completed. The SDK client has picked up and successfully completed the Activity Task. This Event type contains [Activity Execution](/activity-execution) results.

| Field | Description |
| ----- | ----------- |
| result | Serialized result of the completed [Activity](/activities). |
| scheduled_event_id | The Id of the [ActivityTaskScheduled](#activitytaskscheduled) Event that this completion Event corresponds to. |
| started_event_id | The Id of the [ActivityTaskStarted](#activitytaskstarted) Event that this Task corresponds to. |
| identity | Identity of the [Worker](/workers#worker) that completed this Task. |
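To make the timeout and Retry Policy fields on ActivityTaskScheduled concrete, here is a minimal sketch using the Python SDK of a Workflow invoking an Activity with each timeout set. The Activity name `charge_card` and the specific durations are illustrative assumptions, not part of this reference.

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.common import RetryPolicy


@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        # Each keyword argument below appears as a field on the
        # resulting ActivityTaskScheduled Event.
        return await workflow.execute_activity(
            "charge_card",                                   # hypothetical Activity name
            order_id,
            schedule_to_close_timeout=timedelta(minutes=5),  # caps total time, including retries
            start_to_close_timeout=timedelta(seconds=30),    # per-attempt limit; retryable
            heartbeat_timeout=timedelta(seconds=10),         # max gap between heartbeats
            retry_policy=RetryPolicy(maximum_attempts=3),
        )
```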
### ActivityTaskFailed

This [Event](/workflow-execution/event#event) type indicates that the [Activity Task](/tasks#activity-task) has failed. The SDK client picked up the Activity Task but did not complete it successfully. This Event type contains [Activity Execution](/activity-execution) errors.

| Field | Description |
| ----- | ----------- |
| failure | Serialized details of the [Activity](/activities) failure. |
| scheduled_event_id | The Id of the [ActivityTaskScheduled](#activitytaskscheduled) Event that this failure Event corresponds to. |
| started_event_id | The Id of the [ActivityTaskStarted](#activitytaskstarted) Event that this failure corresponds to. |
| retry_state | The reason provided for whether the Task should or shouldn't be retried. |

### ActivityTaskTimedOut

This [Event](/workflow-execution/event#event) type indicates that the Activity has timed out according to the [Temporal Server](/temporal-service/temporal-server), due to one of the [Activity](/activities) timeouts, such as the [Schedule-to-Close Timeout](/encyclopedia/detecting-activity-failures#schedule-to-close-timeout), the [Schedule-to-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout), or the Start-to-Close Timeout.

| Field | Description |
| ----- | ----------- |
| failure | Serialized details of the timeout failure. |
| scheduled_event_id | The Id of the [ActivityTaskScheduled](#activitytaskscheduled) Event that this timeout Event corresponds to. |
| started_event_id | The Id of the [ActivityTaskStarted](#activitytaskstarted) Event that this timeout corresponds to. |
| retry_state | The reason provided for whether the Task should or shouldn't be retried. |
| timeout_type | The type of timeout that led to this Event, e.g., Start-to-Close, Schedule-to-Close, or Schedule-to-Start. |

For example, if you run a Workflow containing an Activity Execution that exceeds its Start-to-Close Timeout, with a Retry Policy whose maximum attempts is set to 1 so that it does not retry, the resulting `ActivityTaskTimedOut` Event includes the type of timeout that led to the Event in its `timeout_type` field.

### ActivityTaskCancelRequested

This [Event](/workflow-execution/event#event) type indicates that a request to [cancel](/activity-execution#cancellation) the [Activity](/activities) has occurred.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [ActivityTaskScheduled](#activitytaskscheduled) Event that this cancel Event corresponds to. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |

### ActivityTaskCanceled

This [Event](/workflow-execution/event#event) type indicates that the [Activity](/activities) has been [canceled](/activity-execution#cancellation).

| Field | Description |
| ----- | ----------- |
| details | Additional information reported by the Activity upon confirming cancelation. |
| latest_cancel_requested_event_id | Id of the most recent [ActivityTaskCancelRequested](#activitytaskcancelrequested) Event that refers to the same Activity. |
| scheduled_event_id | The Id of the [ActivityTaskScheduled](#activitytaskscheduled) Event that this cancelation corresponds to. |
| started_event_id | The Id of the [ActivityTaskStarted](#activitytaskstarted) Event that this cancelation corresponds to. |
| identity | Identifies the [Worker](/workers#worker) that requested cancelation. |

### TimerStarted

This [Event](/workflow-execution/event#event) type indicates that a timer has started.

| Field | Description |
| ----- | ----------- |
| timer_id | The Id assigned to the timer by a [Worker](/workers#worker) or user. |
| start_to_fire_timeout | Amount of time to elapse before the timer fires. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |

### TimerFired

This [Event](/workflow-execution/event#event) type indicates that a timer has fired.

| Field | Description |
| ----- | ----------- |
| timer_id | The Id assigned to the timer by a [Worker](/workers#worker) or user. |
| started_event_id | The Id of the corresponding [TimerStarted](#timerstarted) Event. |

### TimerCanceled

This [Event](/workflow-execution/event#event) type indicates that a timer has been canceled.

| Field | Description |
| ----- | ----------- |
| timer_id | The Id assigned to the timer by a [Worker](/workers#worker) or user. |
| started_event_id | The Id of the corresponding [TimerStarted](#timerstarted) Event. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |

### RequestCancelExternalWorkflowExecutionInitiated

This [Event](/workflow-execution/event#event) type indicates that a [Workflow](/workflows) has requested that the [Temporal Server](/temporal-service/temporal-server) try to cancel another Workflow.

| Field | Description |
| ----- | ----------- |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| namespace | [Namespace](/namespaces) of the Workflow that is going to be canceled. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |
| child_workflow_only | Set to true if the target Workflow is a child of the Workflow that issued the cancelation request. |
| reason | Information provided by the user or client for the Workflow cancelation. |

### RequestCancelExternalWorkflowExecutionFailed

This [Event](/workflow-execution/event#event) type indicates that the [Temporal Server](/temporal-service/temporal-server) could not cancel the targeted [Workflow](/workflows). This is usually because the target Workflow could not be found.

| Field | Description |
| ----- | ----------- |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| namespace | [Namespace](/namespaces) of the Workflow that failed to cancel. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |
| initiated_event_id | Id of the [RequestCancelExternalWorkflowExecutionInitiated](#requestcancelexternalworkflowexecutioninitiated) Event that this failure corresponds to. |

### ExternalWorkflowExecutionCancelRequested

This [Event](/workflow-execution/event#event) type indicates that the [Temporal Server](/temporal-service/temporal-server) has successfully requested the cancelation of the target [Workflow](/workflows).

| Field | Description |
| ----- | ----------- |
| initiated_event_id | Id of the [RequestCancelExternalWorkflowExecutionInitiated](#requestcancelexternalworkflowexecutioninitiated) Event that this cancelation request corresponds to. |
| namespace | [Namespace](/namespaces) of the Workflow that was requested to cancel. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |

### ExternalWorkflowExecutionSignaled

This [Event](/workflow-execution/event#event) type indicates that the [Temporal Server](/temporal-service/temporal-server) has successfully [Signaled](/sending-messages#sending-signals) the targeted [Workflow](/workflows).

| Field | Description |
| ----- | ----------- |
| initiated_event_id | Id of the [SignalExternalWorkflowExecutionInitiated](#signalexternalworkflowexecutioninitiated) Event that this Event corresponds to. |
| namespace | [Namespace](/namespaces) of the Workflow that was signaled. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |

### MarkerRecorded

This [Event](/workflow-execution/event#event) type is transparent to the [Temporal Server](/temporal-service/temporal-server). The Server only stores it and does not try to understand it. The SDK client may use it for Local Activities or Side Effects.

| Field | Description |
| ----- | ----------- |
| marker_name | Identifies the type of marker. |
| details | Serialized information recorded in the marker. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| header | Information passed by the sender that is copied into the marker. |
| failure | Serialized details of a failure. |

### StartChildWorkflowExecutionInitiated

This [Event](/workflow-execution/event#event) type indicates that the [Temporal Server](/temporal-service/temporal-server) will try to start a Child Workflow.

| Field | Description |
| ----- | ----------- |
| namespace | [Namespace](/namespaces) of the Child Workflow. |
| workflow_id | Identifies the Child Workflow. |
| workflow_type | The name/type of Workflow that was initiated. |

### StartChildWorkflowExecutionFailed

This [Event](/workflow-execution/event#event) type indicates that a [Child Workflow Execution](/child-workflows) could not be started. This is usually due to a Child Workflow Id collision.
| Field | Description |
| ----- | ----------- |
| namespace | [Namespace](/namespaces) of the Child Workflow. |
| workflow_id | Identifies the Child Workflow. |
| workflow_type | The name/type of Workflow that failed to start. |
| initiated_event_id | Id of the [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event that this Event corresponds to. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |

### ChildWorkflowExecutionStarted

This [Event](/workflow-execution/event#event) type indicates that a [Child Workflow Execution](/child-workflows) has successfully started. This also causes a [WorkflowExecutionStarted](#workflowexecutionstarted) Event to be recorded for the Workflow that started.

| Field | Description |
| ----- | ----------- |
| namespace | [Namespace](/namespaces) of the Child Workflow. |
| initiated_event_id | Id of the [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event that this Event corresponds to. |
| workflow_execution | Identifies the Workflow and the run of the Workflow Execution. |
| workflow_type | The name/type of Workflow that started execution. |
| header | Information passed by the parent Workflow that is copied into the Child Workflow Task. |

### ChildWorkflowExecutionCompleted

This [Event](/workflow-execution/event#event) type indicates that a [Child Workflow Execution](/child-workflows) has successfully completed. This also causes a [WorkflowExecutionCompleted](#workflowexecutioncompleted) Event to be recorded for the [Workflow](/workflows) that completed.

| Field | Description |
| ----- | ----------- |
| result | Serialized result of the completed Child Workflow. |
| namespace | [Namespace](/namespaces) of the completed Child Workflow. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |
| workflow_type | The name/type of Workflow that completed. |
| initiated_event_id | Id of the [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event that this Event corresponds to. |
| started_event_id | Id of the [ChildWorkflowExecutionStarted](#childworkflowexecutionstarted) Event that this Event corresponds to. |

### ChildWorkflowExecutionFailed

This [Event](/workflow-execution/event#event) type indicates that a [Child Workflow Execution](/child-workflows) has failed. This also causes a [WorkflowExecutionFailed](#workflowexecutionfailed) Event to be recorded for the Workflow that failed.

| Field | Description |
| ----- | ----------- |
| failure | Serialized details of the Child Workflow failure. |
| namespace | [Namespace](/namespaces) of the Child Workflow that failed. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |
| workflow_type | The name/type of Workflow that failed. |
| initiated_event_id | Id of the [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event that this Event corresponds to. |
| started_event_id | Id of the [ChildWorkflowExecutionStarted](#childworkflowexecutionstarted) Event that this failure corresponds to. |
| retry_state | The reason provided for whether the Task should or shouldn't be retried. |

### ChildWorkflowExecutionCanceled

This [Event](/workflow-execution/event#event) type indicates that a Child Workflow Execution has been canceled. This also causes a [WorkflowExecutionCanceled](#workflowexecutioncanceled) Event to be recorded for the Workflow that was canceled.

| Field | Description |
| ----- | ----------- |
| details | Additional information reported by the Child Workflow upon cancelation. |
| namespace | [Namespace](/namespaces) of the Child Workflow that was canceled. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |
| workflow_type | The name/type of Workflow that was canceled. |
| initiated_event_id | Id of the [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event that this Event corresponds to. |
| started_event_id | Id of the [ChildWorkflowExecutionStarted](#childworkflowexecutionstarted) Event that this cancelation corresponds to. |

### ChildWorkflowExecutionTimedOut

This Event type indicates that a [Child Workflow Execution](/child-workflows) has been timed out by the [Temporal Server](/temporal-service/temporal-server). This also causes a [WorkflowExecutionTimedOut](#workflowexecutiontimedout) Event to be recorded for the Workflow that timed out.

| Field | Description |
| ----- | ----------- |
| namespace | [Namespace](/namespaces) of the Child Workflow. |
| workflow_execution | Identifies the Workflow and the run of the Workflow Execution. |
| workflow_type | The name/type of Workflow that timed out. |
| initiated_event_id | Id of the [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event that this Event corresponds to. |
| started_event_id | Id of the [ChildWorkflowExecutionStarted](#childworkflowexecutionstarted) Event that this timeout corresponds to. |
| retry_state | The reason provided for whether the Task should or shouldn't be retried. |

### ChildWorkflowExecutionTerminated

This [Event](/workflow-execution/event#event) type indicates that a Child Workflow Execution has been terminated. This also causes a [WorkflowExecutionTerminated](#workflowexecutionterminated) Event to be recorded for the Workflow that was terminated.

| Field | Description |
| ----- | ----------- |
| namespace | [Namespace](/namespaces) of the Child Workflow. |
| workflow_execution | Identifies the Workflow and the run of the Workflow Execution. |
| workflow_type | The name/type of Workflow that was terminated. |
| initiated_event_id | Id of the [StartChildWorkflowExecutionInitiated](#startchildworkflowexecutioninitiated) Event that this Event corresponds to. |
| started_event_id | Id of the [ChildWorkflowExecutionStarted](#childworkflowexecutionstarted) Event that this termination corresponds to. |
| retry_state | The reason provided for whether the Task should or shouldn't be retried. |

### SignalExternalWorkflowExecutionInitiated

This [Event](/workflow-execution/event#event) type indicates that the [Temporal Server](/temporal-service/temporal-server) will try to [Signal](/sending-messages#sending-signals) the targeted [Workflow](/workflows). This Event type contains the Signal name, as well as a Signal payload.

| Field | Description |
| ----- | ----------- |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| namespace | [Namespace](/namespaces) of the Workflow that is to be signaled. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |
| signal_name | The name/type of Signal to be fired. |
| input | Information that is deserialized by the SDK to provide arguments to the Workflow's Signal handler. |
| child_workflow_only | Set to true if the target Workflow is a child of the Workflow that issued the Signal request. |
| header | Information to be passed from the Signal to the targeted Workflow. |

### SignalExternalWorkflowExecutionFailed

This [Event](/workflow-execution/event#event) type indicates that the [Temporal Server](/temporal-service/temporal-server) could not [Signal](/sending-messages#sending-signals) the targeted [Workflow](/workflows), usually because the Workflow could not be found.

| Field | Description |
| ----- | ----------- |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| namespace | [Namespace](/namespaces) of the Workflow that failed to be signaled. |
| workflow_execution | Identifies the Workflow and the run of the [Workflow Execution](/workflow-execution). |
| initiated_event_id | Id of the [SignalExternalWorkflowExecutionInitiated](#signalexternalworkflowexecutioninitiated) Event that this failure corresponds to. |

### UpsertWorkflowSearchAttributes

This [Event](/workflow-execution/event#event) type indicates that the Workflow's [Search Attributes](/search-attribute) should be updated and synchronized with the visibility store.

| Field | Description |
| ----- | ----------- |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that this Event was reported with. |
| search_attributes | Provides data for setting a Workflow's [Search Attributes](/search-attribute). |
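As a concrete illustration, the following Python sketch produces an UpsertWorkflowSearchAttributes Event; it assumes a custom Keyword Search Attribute named `CustomerId` has already been registered on the Namespace.

```python
from temporalio import workflow
from temporalio.common import SearchAttributeKey

# Assumption: a custom Keyword Search Attribute named "CustomerId"
# is registered on the Namespace.
CUSTOMER_ID = SearchAttributeKey.for_keyword("CustomerId")


@workflow.defn
class AccountWorkflow:
    @workflow.run
    async def run(self, customer_id: str) -> None:
        # Writes an UpsertWorkflowSearchAttributes Event to History and
        # synchronizes the new value with the visibility store.
        workflow.upsert_search_attributes([CUSTOMER_ID.value_set(customer_id)])
```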
### WorkflowExecutionUpdateAcceptedEvent

This [Event](/workflow-execution/event#event) type indicates that a [Workflow Execution](/workflow-execution) has accepted an [Update](/sending-messages#sending-updates) for execution. Because initially requesting an Update generates no Event of its own, the original request input payload is recorded and stored by this Event.

| Field | Description |
| ----- | ----------- |
| protocol_instance_id | The Id of the Update protocol instance that is executing this Update. |
| accepted_request_message_id | The Id of the request message sent by [Temporal Server](/temporal-service/temporal-server) to the [Worker](/workers#worker). |
| accepted_request_sequencing_event_id | Execute this Update after the Event with this Id. |
| accepted_request | The request input and metadata initially provided by the invoker of the Update and subsequently relayed by Temporal Server to the Worker for acceptance and execution. |

### WorkflowExecutionUpdateCompletedEvent

This [Event](/workflow-execution/event#event) type indicates that a [Workflow Execution](/workflow-execution) has executed an [Update](/sending-messages#sending-updates) to completion.

| Field | Description |
| ----- | ----------- |
| meta | The metadata associated with this Update, sourced from the initial request. |
| accepted_event_id | The Id of the [WorkflowExecutionUpdateAcceptedEvent](#workflowexecutionupdateacceptedevent) in which the Platform accepted this Update for execution. |
| outcome | The outcome of this Update's execution: whether it resulted in a success or a failure. |

### NexusOperationScheduled

This Event type indicates that a Nexus Operation was scheduled by a caller Workflow. The caller's [Nexus Machinery](/glossary#nexus-machinery) will attempt to start the Nexus Operation. This Event type contains the Nexus Operation input and the Operation request Id.

| Field | Description |
| ----- | ----------- |
| endpoint | Endpoint name; must exist in the endpoint registry. |
| service | Service name. |
| operation | Operation name. |
| input | Input for the Operation. The Server converts this into Nexus request content and the appropriate content headers internally when sending the StartOperation request. On the handler side, if it is also backed by Temporal, the content is transformed back to the original Payload stored in this Event. |
| schedule_to_close_timeout | Schedule-to-Close Timeout for this Operation. Indicates how long the caller is willing to wait for Operation completion. Calls are retried internally by the Server. |
| nexus_header | Header to attach to the Nexus request. Note that these headers are not the same as Temporal headers on internal Activities and Child Workflows; they are transmitted to Nexus Operations that may be external and are not traditional payloads. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that the corresponding ScheduleNexusOperation command was reported with. |
| request_id | A unique Id generated by the History Service upon creation of this Event. The Id is transmitted with all Nexus StartOperation requests and is used as an idempotency key. |
| endpoint_id | Endpoint Id as resolved in the endpoint registry at the time this Event was generated. This is stored on the Event and used internally by the Server in case the endpoint is renamed after the Event was originally scheduled. |

### NexusOperationStarted

This Event type indicates that a Nexus Operation Execution was started. This Event is added to the caller's Workflow History for asynchronous Nexus Operations, for example those that are backed by a Workflow. The Event is not added to the caller's Workflow History for synchronous Nexus Operations, since they transition directly to [NexusOperationCompleted](#nexusoperationcompleted) or another final state such as [NexusOperationFailed](#nexusoperationfailed) when the response is provided synchronously by the Nexus handler.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [NexusOperationScheduled](#nexusoperationscheduled) Event this Task corresponds to. |
| operation_token | The Operation token returned by the Nexus handler in the response to the StartOperation request. This token is used when canceling the Operation. |
| request_id | The request Id allocated at schedule time. |

### NexusOperationCompleted

This Event type indicates that a Nexus Operation has completed successfully. The caller's Workflow History records the result of a successful Nexus Operation with this Event for both synchronous and asynchronous Nexus Operations. This Event type contains Nexus Operation results.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [NexusOperationScheduled](#nexusoperationscheduled) Event. Uniquely identifies this Operation. |
| result | Serialized result of the Nexus Operation: the response of the Nexus handler. Delivered either via a completion callback or as a response to a synchronous Operation. |
| request_id | The request Id allocated at schedule time. |

### NexusOperationFailed

This Event type indicates that a Nexus Operation has failed. The caller's Workflow History records a failed Nexus Operation with this Event for both synchronous and asynchronous Nexus Operations: for example, when a Nexus handler responds synchronously with a non-retryable error, or when a Workflow that backs an asynchronous Operation fails, resulting in a [WorkflowExecutionFailed](#workflowexecutionfailed) Event. This Event type contains a Nexus Operation failure.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [NexusOperationScheduled](#nexusoperationscheduled) Event. Uniquely identifies this Operation. |
| failure | Failure details: a NexusOperationFailureInfo wrapping an ApplicationFailureInfo. |
| request_id | The request Id allocated at schedule time. |
### NexusOperationTimedOut

This Event type indicates that a Nexus Operation has timed out according to the Temporal Server. The only Nexus Operation timeout is the Schedule-to-Close Timeout.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [NexusOperationScheduled](#nexusoperationscheduled) Event. Uniquely identifies this Operation. |
| failure | Failure details: a NexusOperationFailureInfo wrapping a CanceledFailureInfo. |
| request_id | The request Id allocated at schedule time. |

### NexusOperationCancelRequested

This Event type indicates that the Workflow that scheduled a Nexus Operation has requested to cancel it.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [NexusOperationScheduled](#nexusoperationscheduled) Event this cancel request corresponds to. |
| workflow_task_completed_event_id | The Id of the [WorkflowTaskCompleted](#workflowtaskcompleted) Event that the corresponding RequestCancelNexusOperation command was reported with. |

### NexusOperationCanceled

This Event type indicates that a Nexus Operation has resolved as canceled.

| Field | Description |
| ----- | ----------- |
| scheduled_event_id | The Id of the [NexusOperationScheduled](#nexusoperationscheduled) Event. Uniquely identifies this Operation. |
| failure | Cancellation details. |
| request_id | The request Id allocated at schedule time. |

---

## Temporal Failures reference

A Failure is Temporal's representation of various types of errors that occur in the system. There are different types of Failures, and each has a different type in the SDKs and different information in the protobuf messages (which are used to communicate with the Temporal Service and appear in [Event History](/workflow-execution/event#event-history)).

## Temporal Failure

Most SDKs have a base class that the other Failures extend:

- TypeScript: [TemporalFailure](https://typescript.temporal.io/api/classes/common.TemporalFailure)
- Java: [TemporalFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/TemporalFailure.html)
- Python: [FailureError](https://python.temporal.io/temporalio.exceptions.FailureError.html)
- PHP: [TemporalFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-TemporalFailure.html)

The base [Failure proto message](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure) has these fields:

- `string message`
- `string stack_trace`
- `string source`: The SDK this Failure originated in (for example, `"TypeScriptSDK"`). In some SDKs, this field is used to rehydrate the call stack into an exception object.
- `Failure cause`: The `Failure` message of the cause of this Failure (if applicable).
- `Payload encoded_attributes`: Contains the encoded `message` and `stack_trace` fields when using a [Failure Converter](/failure-converter).

## Application Failure

Workflow, Activity, and Nexus Operation code uses Application Failures to communicate application-specific failures. This is the only type of Temporal Failure created and thrown by user code.

- TypeScript: [ApplicationFailure](https://typescript.temporal.io/api/classes/common.ApplicationFailure)
- Java: [ApplicationFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/ApplicationFailure.html)
- Go: [ApplicationError](https://pkg.go.dev/go.temporal.io/sdk/temporal#ApplicationError)
- Python: [ApplicationError](https://python.temporal.io/temporalio.exceptions.ApplicationError.html)
- PHP: [ApplicationFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-ApplicationFailure.html)
- Proto: [ApplicationFailureInfo](https://api-docs.temporal.io/#temporal.api.failure.v1.ApplicationFailureInfo) and [Failure](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure)
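For instance, in the Python SDK an Activity can raise an `ApplicationError` to communicate an application-specific failure. A minimal sketch follows; the Activity name and the `lookup_order` helper are hypothetical stand-ins.

```python
from typing import Optional

from temporalio import activity
from temporalio.exceptions import ApplicationError


async def lookup_order(order_id: str) -> Optional[dict]:
    """Hypothetical stand-in for a real data-store lookup."""
    return None


@activity.defn
async def validate_order(order_id: str) -> None:
    order = await lookup_order(order_id)
    if order is None:
        # `type`, `message`, and `details` are carried in the resulting
        # Failure proto message; non_retryable=True suppresses retries.
        raise ApplicationError(
            f"Order {order_id} does not exist",
            order_id,  # becomes `details`
            type="OrderNotFound",
            non_retryable=True,
        )
```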
### Errors in Workflows

An error in a Workflow can cause either a **Workflow Task Failure** (the Task is retried) or a **Workflow Execution Failure** (the Workflow is marked as failed). Only Workflow exceptions that are Temporal Failures cause the Workflow Execution to fail; all other exceptions cause the Workflow Task to fail and be retried. (In Go, any error returned from the Workflow fails the Workflow Execution, and a panic fails the Workflow Task.)

Most types of Temporal Failures are raised by the Temporal Service, like a [Cancelled Failure](#cancelled-failure) when the Workflow is Cancelled or an [Activity Failure](#activity-failure) when an Activity fails. In contrast, you can explicitly fail the Workflow Execution by throwing an Application Failure (or, in Go, returning any error) in Workflow Definition code.

#### Workflow Task Failures

A **Workflow Task Failure** is an unexpected situation that prevents a Workflow Task from being processed. It can be triggered by a non-Temporal exception being raised (or a panic in Go) in your Workflow code. Any exception that does not extend Temporal's `FailureError` exception is considered a Workflow Task Failure. These failures cause the Workflow Task to be retried until the Workflow Execution Timeout elapses, which is unlimited by default.

#### Workflow Execution Failures

An `ApplicationError`, an extension of `FailureError`, can be raised in a Workflow to fail the Workflow Execution. A Workflow Execution Failure puts the Workflow Execution into the "Failed" state, and no further attempts are made to progress the Execution. If you create custom exceptions, extend the [`ApplicationError`](https://docs.temporal.io/references/failures#application-failure) class, which is a child class of [`FailureError`](https://docs.temporal.io/references/failures#temporal-failure).

### Errors in Activities

In Activities, you can either throw an Application Failure or another error to fail the Activity Task. In the latter case, the error is converted to an Application Failure. During conversion, the following Application Failure fields are set:

- `type` is set to the error's type name.
- `message` is set to the error message.
- `non_retryable` is set to false.
- `details` are left unset.
- `cause` is a Failure converted from the error's `cause` property.
- `next_retry_delay` is left unset.
- The call stack is copied.

When an [Activity Execution](/activity-execution) fails, the Application Failure from the last Activity Task becomes the `cause` field of the [ActivityFailure](#activity-failure). This ActivityFailure is thrown by the Workflow's call to the Activity, and it can be handled in the Workflow Definition, as shown below.
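A minimal Python sketch of that pattern, assuming a hypothetical `charge_card` Activity that raises an `ApplicationError` with type `CardDeclined`:

```python
from datetime import timedelta

from temporalio import workflow
from temporalio.exceptions import ActivityError, ApplicationError


@workflow.defn
class PaymentWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        try:
            return await workflow.execute_activity(
                "charge_card",  # hypothetical Activity
                order_id,
                start_to_close_timeout=timedelta(seconds=30),
            )
        except ActivityError as err:
            # The Application Failure from the final Activity Task is the
            # `cause` of the ActivityError wrapper.
            if isinstance(err.cause, ApplicationError) and err.cause.type == "CardDeclined":
                return "declined"
            raise  # any other failure fails the Workflow Execution
```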
### Errors in Nexus Operations

Nexus Operations can end up in the completed, failed, canceled, or timed out state. Under the hood, the Nexus Operation machinery breaks the lifecycle of an Operation into one or more StartOperation requests and completion callbacks, and automatically retries these requests as long as they fail with retryable errors. The Workflow-specified Schedule-to-Close Timeout is enforced by the caller's machinery and is the only way for an Operation to transition to the timed out state. Operations end up in the other three states either when the Operation handler returns a synchronous response or error, or when an asynchronous Operation (for example, one backed by a Workflow) eventually reaches a terminal state.

A Nexus Operation handler can return either retryable or non-retryable errors to indicate to the caller's Nexus machinery whether to retry a given request. Requests that time out before a response is sent to the caller are automatically retried. By default, errors are considered retryable, except for the following:

- Non-retryable Application Failures
- Unsuccessful Operation errors, which can resolve an Operation as either failed or canceled
- [Handler errors](https://github.com/nexus-rpc/api/blob/main/SPEC.md#predefined-handler-errors) with the following types: `BAD_REQUEST`, `UNAUTHENTICATED`, `UNAUTHORIZED`, `NOT_FOUND`, and `RESOURCE_EXHAUSTED`

#### Nexus Operation Task Failures

A Nexus Operation Task Failure is an unexpected situation that prevents a Nexus Operation Task from being processed in a handler. It can be triggered by throwing an unknown error in your Nexus handler code. These failures cause the Nexus Operation Task to be retried.

#### Nexus Operation Execution Failures

A non-retryable Application Failure can be thrown by a Nexus Operation handler to fail the overall Nexus Operation Execution. A Nexus Operation Execution Failure puts the Nexus Operation Execution into the "Failed" state, and no further attempts are made to complete the Nexus Operation.

#### Propagation of Workflow errors

Application Errors thrown from a Workflow created by a Nexus NewWorkflowRunOperation handler are automatically propagated to the caller as a non-retryable error and result in a Nexus Operation Execution Failure.

#### Using Failures in a Nexus handler

In a Nexus Operation handler, you can throw an Application Failure, a Nexus error, or another error to fail the individual Nexus Operation Task or the overall Nexus Operation Execution. Unknown errors are converted to a retryable Application Failure. During conversion, the following fields are set on the Application Failure:

- `non_retryable` is set to false.
- `type` is set to the error's type name.
- `message` is set to the error message.

#### Retryable failures

Retryable Nexus Operation Task failures, such as an unknown error, are automatically retried with a built-in Retry Policy. When a Nexus Task fails, the caller Workflow records the attempt failure on the pending Nexus Operation and sets the following fields:

- `state` is set to the new state, for example BackingOff.
- `attempt` is set to an incremented count.
- `next_attempt_schedule_time` is set to when the Nexus Task will be retried.
- `last_attempt_failure` is set with the following fields:
  - `message` is set to the error message.
  - `failure_info` is set to the Application Failure.

For example, an unknown error thrown in a Nexus handler surfaces as:

```
temporal workflow describe -w my-workflow-id
...
Pending Nexus Operations: 1

  Endpoint                 myendpoint
  Service                  my-hello-service
  Operation                echo
  OperationToken
  State                    BackingOff
  Attempt                  6
  ScheduleToCloseTimeout   0s
  NextAttemptScheduleTime  20 seconds from now
  LastAttemptCompleteTime  11 seconds ago
  LastAttemptFailure       {"message":"unexpected response status: \"500 Internal Server Error\": internal error","applicationFailureInfo":{}}
```

### Non-retryable

When an Activity or Workflow throws an Application Failure, the Failure's `type` field is matched against a Retry Policy's list of [non-retryable errors](/encyclopedia/retry-policies#non-retryable-errors) to determine whether to retry the Activity or Workflow. Activities and Workflows can also avoid retrying by setting an Application Failure's `non_retryable` flag to `true`.

When a Nexus Operation handler throws an Application Failure, it is retried by default using a built-in Retry Policy that cannot be customized. Nexus Operation handlers can avoid retrying by setting an Application Failure's `non_retryable` flag to `true`. When a non-retryable error is returned from a Nexus handler, the overall Nexus Operation Execution fails, and the error is returned to the caller's Workflow Execution as a Nexus Operation Failure.

### Setting the Next Retry Delay {#activity-next-retry-delay}

By setting the Next Retry Delay on a given Application Failure, you can tell the Server to wait that amount of time before trying the Activity or Workflow again. This overrides whatever the Retry Policy would have computed for your specific exception.

- Java: [NextRetryDelay](/develop/java/failure-detection#activity-next-retry-delay)
- TypeScript: [nextRetryDelay](/develop/typescript/failure-detection#activity-next-retry-delay)
- PHP: [NextRetryDelay](/develop/php/failure-detection#activity-next-retry-delay)

### Nexus errors {#nexus-errors}

#### Default mapping

By default, Application Failures thrown from a Nexus Operation handler are mapped to the following underlying Nexus errors, based on the value of `non_retryable`:

| `non_retryable` | Nexus error | HTTP status code |
| :-- | :-- | :-- |
| false (default) | HandlerErrorTypeInternal | 500 Internal Server Error |
| true | UnsuccessfulOperationError | 424 Failed Dependency |

#### Use Nexus errors directly

For improved semantics and mapping to HTTP status codes for external Nexus callers (when supported), we recommend that Nexus Operation handlers throw a Nexus error directly, chosen from the lists below with the associated retry semantics. For example, the Nexus Go SDK provides:

- `nexus.HandlerError(nexus.HandlerErrorType, msg)`
- `nexus.UnsuccessfulOperationError{state, failure}`

#### Retryable Nexus errors

| Nexus error type | `non_retryable` |
| :-- | :-- |
| HandlerErrorTypeResourceExhausted | false |
| HandlerErrorTypeInternal | false |
| HandlerErrorTypeNotImplemented | false |
| HandlerErrorTypeUnavailable | false |

#### Non-retryable Nexus errors

| Nexus error type | `non_retryable` |
| :-- | :-- |
| HandlerErrorTypeBadRequest | true |
| HandlerErrorTypeUnauthenticated | true |
| HandlerErrorTypeUnauthorized | true |
| HandlerErrorTypeNotFound | true |
| UnsuccessfulOperationError | true |

## Cancelled Failure

When [Cancellation](/activity-execution#cancellation) of a Workflow, Activity, or Nexus Operation is requested, SDKs represent the cancellation to the user in language-specific ways.
For example, in TypeScript, in some cases a Cancelled Failure is thrown directly by a Workflow API function, and in other cases the Cancelled Failure is wrapped in a different Failure. To check both types of cases, TypeScript has the [isCancellation](https://typescript.temporal.io/api/namespaces/workflow#iscancellation) helper.

When a Workflow, Activity, or Nexus Operation is successfully Cancelled, a Cancelled Failure is the `cause` field of the Activity Failure, Nexus Operation Failure, or "Workflow failed" error.

- TypeScript: [CancelledFailure](https://typescript.temporal.io/api/classes/common.CancelledFailure)
- Java: [CanceledFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/CanceledFailure.html)
- Go: [CanceledError](https://pkg.go.dev/go.temporal.io/sdk/temporal#CanceledError)
- Python: [CancelledError](https://python.temporal.io/temporalio.exceptions.CancelledError.html)
- PHP: [CanceledFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-CanceledFailure.html)
- Proto: [CanceledFailureInfo](https://api-docs.temporal.io/#temporal.api.failure.v1.CanceledFailureInfo) and [Failure](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure)

## Activity Failure

An Activity Failure is delivered to the Workflow Execution when an Activity fails. It contains information about the failure and the Activity Execution; for example, the Activity Type and Activity Id. The reason for the failure is in the `cause` field. For example, if an Activity Execution times out, the `cause` is a [Timeout Failure](#timeout-failure).

- TypeScript: [ActivityFailure](https://typescript.temporal.io/api/classes/common.ActivityFailure)
- Java: [ActivityFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/ActivityFailure.html)
- Go: [ActivityError](https://pkg.go.dev/go.temporal.io/sdk/temporal#ActivityError)
- Python: [ActivityError](https://python.temporal.io/temporalio.exceptions.ActivityError.html)
- PHP: [ActivityFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-ActivityFailure.html)
- Proto: [ActivityFailureInfo](https://api-docs.temporal.io/#temporal.api.failure.v1.ActivityFailureInfo) and [Failure](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure)

## Nexus Operation Failure

A Nexus Operation Failure is delivered to the Workflow Execution when a Nexus Operation fails. It contains information about the failure and the Nexus Operation Execution; for example, the Nexus Operation name and Nexus Operation token. The reason for the failure is in the message and `cause` (typically an Application Error or a Canceled Error).

- Go: NexusOperationError
- Proto: NexusOperationFailureInfo

A Nexus Operation Failure includes the following fields:

- `endpoint` is set to the name of the Endpoint.
- `service` is set to the name of the Service.
- `operation` is set to the name of the Operation.
- `operation_token` is set if this is an asynchronous Operation; the token can be used to perform additional actions, such as canceling the Operation.
- `scheduled_event_id` is set to the caller's Event Id that scheduled the Operation.
- `message` is set to a generic unsuccessful error message.
- `cause` is set to the underlying Application Failure with the following fields:
  - `non_retryable` is set to true.
  - `type` is set to the error's type name.
  - `message` is set to the error message.
  - `nexus_error_code` is set to the underlying Nexus error code.
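As noted under Cancelled Failure above, each SDK surfaces cancellation in its own way. In Python, for example, cancellation is delivered to the Workflow as `asyncio.CancelledError`; the sketch below shows the common cleanup-then-re-raise pattern, with hypothetical Activity names.

```python
import asyncio
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class ProvisioningWorkflow:
    @workflow.run
    async def run(self) -> str:
        try:
            await workflow.execute_activity(
                "provision_resources",  # hypothetical Activity
                start_to_close_timeout=timedelta(minutes=10),
            )
            return "provisioned"
        except asyncio.CancelledError:
            # Run compensating cleanup, then re-raise so the Workflow
            # records a Cancelled Failure instead of completing normally.
            await workflow.execute_activity(
                "release_resources",  # hypothetical Activity
                start_to_close_timeout=timedelta(minutes=1),
            )
            raise
```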
## Child Workflow Failure

A Child Workflow Failure is delivered to the Workflow Execution when a Child Workflow Execution fails. It contains information about the failure and the Child Workflow Execution; for example, the Workflow Type and Workflow Id. The reason for the failure is in the `cause` field.

- TypeScript: [ChildWorkflowFailure](https://typescript.temporal.io/api/classes/common.ChildWorkflowFailure)
- Java: [ChildWorkflowFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/ChildWorkflowFailure.html)
- Go: [ChildWorkflowExecutionError](https://pkg.go.dev/go.temporal.io/sdk/temporal#ChildWorkflowExecutionError)
- Python: [ChildWorkflowError](https://python.temporal.io/temporalio.exceptions.ChildWorkflowError.html)
- PHP: [ChildWorkflowFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-ChildWorkflowFailure.html)
- Proto: [ChildWorkflowExecutionFailureInfo](https://api-docs.temporal.io/#temporal.api.failure.v1.ChildWorkflowExecutionFailureInfo) and [Failure](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure)

## Timeout Failure

A Timeout Failure represents the timeout of an Activity or Workflow. When an Activity times out, the last Heartbeat details it emitted are attached.

- TypeScript: [TimeoutFailure](https://typescript.temporal.io/api/classes/common.TimeoutFailure)
- Java: [TimeoutFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/TimeoutFailure.html)
- Go: [TimeoutError](https://pkg.go.dev/go.temporal.io/sdk/temporal#TimeoutError)
- Python: [TimeoutError](https://python.temporal.io/temporalio.exceptions.TimeoutError.html)
- PHP: [TimeoutFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-TimeoutFailure.html)
- Proto: [TimeoutFailureInfo](https://api-docs.temporal.io/#temporal.api.failure.v1.TimeoutFailureInfo) and [Failure](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure)

## Terminated Failure

A Terminated Failure is used as the `cause` of an error when a Workflow is terminated. You receive the error in one of the following locations:

- Inside a Workflow that's waiting for the result of a Child Workflow.
- When waiting for the result of a Workflow on the Client, as shown below.

In the SDKs:

- TypeScript: [TerminatedFailure](https://typescript.temporal.io/api/classes/common.TerminatedFailure)
- Java: [TerminatedFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/TerminatedFailure.html)
- Go: [TerminatedError](https://pkg.go.dev/go.temporal.io/sdk/temporal#TerminatedError)
- Python: [TerminatedError](https://python.temporal.io/temporalio.exceptions.TerminatedError.html)
- PHP: [TerminatedFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-TerminatedFailure.html)
- Proto: [TerminatedFailureInfo](https://api-docs.temporal.io/#temporal.api.failure.v1.TerminatedFailureInfo) and [Failure](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure)
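For the client-side case, a hedged Python sketch: waiting on a Workflow's result raises a `WorkflowFailureError` whose `cause` distinguishes termination from timeout (the Service address and Workflow Id below are assumptions).

```python
from temporalio.client import Client, WorkflowFailureError
from temporalio.exceptions import TerminatedError
from temporalio.exceptions import TimeoutError as WorkflowTimeoutError


async def report_outcome() -> None:
    client = await Client.connect("localhost:7233")  # assumed local dev Service
    handle = client.get_workflow_handle("my-workflow-id")  # hypothetical Id
    try:
        print(await handle.result())
    except WorkflowFailureError as err:
        # The Terminated or Timeout Failure is surfaced as the `cause`.
        if isinstance(err.cause, TerminatedError):
            print("Workflow was terminated")
        elif isinstance(err.cause, WorkflowTimeoutError):
            print("Workflow timed out")
        else:
            raise
```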
## Server Failure

A Server Failure is used for errors that originate in the Temporal Service.

- TypeScript: [ServerFailure](https://typescript.temporal.io/api/classes/common.ServerFailure)
- Java: [ServerFailure](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/io/temporal/failure/ServerFailure.html)
- Go: [ServerError](https://pkg.go.dev/go.temporal.io/sdk/temporal#ServerError)
- Python: [ServerError](https://python.temporal.io/temporalio.exceptions.ServerError.html)
- PHP: [ServerFailure](https://php.temporal.io/classes/Temporal-Exception-Failure-ServerFailure.html)
- Proto: [ServerFailureInfo](https://api-docs.temporal.io/#temporal.api.failure.v1.ServerFailureInfo) and [Failure](https://api-docs.temporal.io/#temporal.api.failure.v1.Failure)

---

## Temporal Platform references

- [SDK metrics reference](/references/sdk-metrics)
- [Commands reference](/references/commands)
- [Events reference](/references/events)
- [Web UI environment variables reference](/references/web-ui-environment-variables)
- [Temporal Service configuration reference](/references/configuration)
- [Temporal Web UI configuration reference](/references/web-ui-configuration)
- [Temporal Cloud Operation reference](/references/operation-list)
- [Go SDK API reference](https://pkg.go.dev/go.temporal.io/sdk)
- [Java SDK API reference](https://www.javadoc.io/doc/io.temporal/temporal-sdk/latest/index.html)
- [Python SDK API reference](https://python.temporal.io/)
- [TypeScript SDK API reference](https://typescript.temporal.io)
- [.NET SDK API reference](https://dotnet.temporal.io/api/)
- [PHP SDK API reference](https://php.temporal.io/namespaces/temporal.html)
- [Glossary](/glossary)

---

## Operations

Temporal Cloud [rate limits operations per second (OPS)](/cloud/limits#operations-per-second) per Namespace. An operation is anything that (1) a user does directly, or (2) Temporal does on behalf of the user in the background, that results in load on Temporal Server. The exception is visibility queries: they do hit the Server (the query is passed from the Server to the visibility store), but the load is primarily on the visibility system. Visibility rate limits are separate from OPS rate limits.

Below is the list of operations, including:

- operation name
- description
- priority (foreground is higher priority, background is lower priority)
- effect of that operation being throttled

---

## Temporal SDK metrics reference

:::info SDK metrics

The information on this page is relevant to [Temporal SDKs](/encyclopedia/temporal-sdks).

See [Cloud metrics](/cloud/metrics/) for metrics emitted by [Temporal Cloud](/cloud/overview). See [Cluster metrics](/references/cluster-metrics) for metrics emitted by the [OSS Cluster](/temporal-service).

Some SDKs may emit metrics beyond what is listed in this SDK metrics reference. Only metrics included in this reference have guaranteed, defined behavior. Other metrics are considered deprecated, inconsistent, or experimental.

:::

The Temporal SDKs emit a set of metrics from Temporal Client usage and Worker Processes.
- [How to emit metrics using the Go SDK](/develop/go/observability#metrics)
- [How to emit metrics using the Java SDK](/develop/java/observability#metrics)
- [How to emit metrics using the Python SDK](/develop/python/observability#metrics)
- [How to emit metrics using the TypeScript SDK](/develop/typescript/observability#metrics)
- [How to emit metrics using the .NET SDK](/develop/dotnet/observability#metrics)
- [How to emit metrics using the Ruby SDK](/develop/ruby/observability#metrics)
- [How to tune Worker performance based on metrics](/develop/worker-performance)

All metrics are prefixed with `temporal_` before being exported to their configured destination. (The prefix is omitted in parts of this reference.)

Currently, some metrics are specific to certain SDKs. TypeScript, Python, .NET, and Ruby SDK metrics are defined in the Core SDK. PHP and Go metrics are defined in the Go SDK. Java metrics are defined in the Java SDK. Metrics are defined in the following locations:

- [Core SDK Worker metrics](https://github.com/temporalio/sdk-core/blob/master/crates/sdk-core/src/telemetry/metrics.rs)
- [Core SDK Client metrics](https://github.com/temporalio/sdk-core/blob/master/crates/client/src/metrics.rs)
- [Java SDK Worker metrics](https://github.com/temporalio/sdk-java/blob/master/temporal-sdk/src/main/java/io/temporal/worker/MetricsType.java)
- [Java SDK Client metrics](https://github.com/temporalio/sdk-java/blob/master/temporal-serviceclient/src/main/java/io/temporal/serviceclient/MetricsType.java)
- [Go SDK Worker and Client metrics](https://github.com/temporalio/sdk-go/blob/c32b04729cc7691f80c16f80eed7f323ee5ce24f/internal/common/metrics/constants.go)

:::note Metric units across SDKs

The unit of measurement for metrics can vary based on which SDK they are reported from:

**Core-based SDKs:** Metrics of type Histogram are measured in _milliseconds_ by default. This can be customized to use seconds for SDKs built on the [Core SDK](/glossary#core-sdk), a shared common core library used by several Temporal SDKs, including TypeScript, Python, and .NET.

**Java and Go SDKs:** Metrics of type Histogram are measured in _seconds_.

:::

Each metric may have some combination of the following keys attached to it:

- `task-queue`: Task Queue that the Worker Entity is polling
- `namespace`: Namespace the Worker is bound to
- `poller_type`: One of the following:
  - `workflow_task`
  - `activity_task`
  - `nexus_task` (Go and Java only)
  - `sticky_workflow_task`
- `worker_type`: One of the following:
  - `ActivityWorker`
  - `WorkflowWorker`
  - `LocalActivityWorker` (Go and Java only)
  - `NexusWorker` (Go and Java only)
- `activity_type`: The name of the Activity Function the metric is associated with
- `workflow_type`: The name of the Workflow Function the metric is associated with
- `operation`: RPC method name; available for metrics related to Temporal Client gRPC requests

Some keys may not be available in every SDK, and Histogram metrics may have different buckets in each SDK.
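Before the metric tables, a brief sketch of wiring metrics up in the Python SDK: a `Runtime` configured with Prometheus telemetry exposes the SDK metrics listed below on a local scrape endpoint. The bind address and target address are assumptions.

```python
from temporalio.client import Client
from temporalio.runtime import PrometheusConfig, Runtime, TelemetryConfig


async def connect_with_metrics() -> Client:
    # One Runtime per process; Clients and Workers created from it emit
    # SDK metrics via the Prometheus scrape endpoint configured here.
    runtime = Runtime(
        telemetry=TelemetryConfig(
            metrics=PrometheusConfig(bind_address="127.0.0.1:9090")  # assumed address
        )
    )
    return await Client.connect("localhost:7233", runtime=runtime)  # assumed local dev Service
```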
| Metric name | Emitted by | Metric type | Availability |
| ----------- | ---------- | ----------- | ------------ |
| [temporal_activity_execution_cancelled](#activity_execution_cancelled) | Worker | Counter | Java |
| [temporal_activity_execution_failed](#activity_execution_failed) | Worker | Counter | Core, Go, Java |
| [temporal_activity_execution_latency](#activity_execution_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_activity_poll_no_task](#activity_poll_no_task) | Worker | Counter | Core, Go, Java |
| [temporal_activity_schedule_to_start_latency](#activity_schedule_to_start_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_activity_succeed_endtoend_latency](#activity_succeed_endtoend_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_activity_task_error](#activity_task_error) | Worker | Counter | Go |
| [temporal_corrupted_signals](#corrupted_signals) | Worker | Counter | Go, Java |
| [temporal_local_activity_execution_cancelled](#local_activity_execution_cancelled) | Worker | Counter | Core, Go, Java |
| [temporal_local_activity_execution_failed](#local_activity_execution_failed) | Worker | Counter | Core, Go, Java |
| [temporal_local_activity_execution_latency](#local_activity_execution_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_local_activity_succeeded_endtoend_latency](#local_activity_succeeded_endtoend_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_local_activity_total](#local_activity_total) | Worker | Counter | Core, Go, Java |
| [temporal_long_request](#long_request) | Service Client | Counter | Core, Go, Java |
| [temporal_long_request_failure](#long_request_failure) | Service Client | Counter | Core, Go, Java |
| [temporal_long_request_latency](#long_request_latency) | Service Client | Histogram | Core, Go, Java |
| [temporal_nexus_poll_no_task](#nexus_poll_no_task) | Worker | Counter | Core, Go, Java |
| [temporal_nexus_task_schedule_to_start_latency](#nexus_task_schedule_to_start_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_nexus_task_execution_failed](#nexus_task_execution_failed) | Worker | Counter | Core, Go, Java |
| [temporal_nexus_task_execution_latency](#nexus_task_execution_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_nexus_task_endtoend_latency](#nexus_task_endtoend_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_num_pollers](#num_pollers) | Worker | Gauge | Core, Go |
| [temporal_poller_start](#poller_start) | Worker | Counter | Go, Java |
| [temporal_request](#request) | Service Client | Counter | Core, Go, Java |
| [temporal_request_failure](#request_failure) | Service Client | Counter | Core, Go, Java |
| [temporal_request_latency](#request_latency) | Service Client | Histogram | Core, Go, Java |
| [temporal_resource_slots_cpu_usage](#resource_slots_cpu_usage) | Worker | Gauge | Core, Java |
| [temporal_resource_slots_mem_usage](#resource_slots_mem_usage) | Worker | Gauge | Core, Java |
| [temporal_sticky_cache_hit](#sticky_cache_hit) | Worker | Counter | Core, Go, Java |
| [temporal_sticky_cache_miss](#sticky_cache_miss) | Worker | Counter | Core, Go, Java |
| [temporal_sticky_cache_size](#sticky_cache_size) | Worker | Gauge | Core, Go, Java |
| [temporal_sticky_cache_total_forced_eviction](#sticky_cache_total_forced_eviction) | Worker | Counter | Go, Java |
| [temporal_unregistered_activity_invocation](#unregistered_activity_invocation) | Worker | Counter | Go |
| [temporal_worker_start](#worker_start) | Worker | Counter | Core, Go, Java |
| [temporal_worker_task_slots_available](#worker_task_slots_available) | Worker | Gauge | Core, Go, Java |
| [temporal_worker_task_slots_used](#worker_task_slots_used) | Worker | Gauge | Core, Go, Java |
| [temporal_workflow_active_thread_count](#workflow_active_thread_count) | Worker | Gauge | Java |
| [temporal_workflow_cancelled](#workflow_cancelled) | Worker | Counter | Core, Go, Java |
| [temporal_workflow_completed](#workflow_completed) | Worker | Counter | Core, Go, Java |
| [temporal_workflow_continue_as_new](#workflow_continue_as_new) | Worker | Counter | Core, Go, Java |
| [temporal_workflow_endtoend_latency](#workflow_endtoend_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_workflow_failed](#workflow_failed) | Worker | Counter | Core, Go, Java |
| [temporal_workflow_task_execution_failed](#workflow_task_execution_failed) | Worker | Counter | Core, Go, Java |
| [temporal_workflow_task_execution_latency](#workflow_task_execution_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_workflow_task_queue_poll_empty](#workflow_task_queue_poll_empty) | Worker | Counter | Core, Go, Java |
| [temporal_workflow_task_queue_poll_succeed](#workflow_task_queue_poll_succeed) | Worker | Counter | Core, Go, Java |
| [temporal_workflow_task_replay_latency](#workflow_task_replay_latency) | Worker | Histogram | Core, Go, Java |
| [temporal_workflow_task_schedule_to_start_latency](#workflow_task_schedule_to_start_latency) | Worker | Histogram | Core, Go, Java |

### activity_execution_cancelled

An Activity Execution was canceled.

- Type: Counter
- Available in: Java
- Tags: `activity_type`, `namespace`, `task_queue`

### activity_execution_failed

An Activity Execution failed. This does not include Local Activity failures in the Go and Java SDKs (see [local_activity_execution_failed](#local_activity_execution_failed)).

- Type: Counter
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### activity_execution_latency

Time to complete an Activity Execution, from the time the Activity Task is generated to the time the language SDK responds with a completion (failure or success).

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### activity_poll_no_task

An Activity Worker poll for an Activity Task timed out, and no Activity Task is available to pick from the Task Queue.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

### activity_schedule_to_start_latency

The Schedule-To-Start time of an Activity Task in seconds. A [Schedule-To-Start Timeout](/encyclopedia/detecting-activity-failures#schedule-to-start-timeout) can be set when an Activity Execution is spawned. This metric is useful for ensuring Activity Tasks are being processed from the queue in a timely manner. Some SDKs may include the `activity_type` label, but the metric should not vary by type, as it does not influence the rate at which tasks are pulled from the queue.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

### activity_succeed_endtoend_latency

Total latency of successfully finished Activity Executions from the time they are scheduled to the time they are completed. This metric is not recorded for async Activity completion.
- Type: Histogram
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### activity_task_error

An internal error or panic occurred during Activity Task handling or execution.

- Type: Counter
- Available in: Go
- Tags: `activity_type`, `namespace`, `task_queue`, `workflow_type`

### corrupted_signals

Number of Signals whose payload could not be deserialized.

- Type: Counter
- Available in: Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### local_activity_execution_cancelled

A Local Activity Execution was canceled.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### local_activity_execution_failed

A Local Activity Execution failed.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### local_activity_execution_latency

Time to complete a Local Activity Execution, from the time the first Activity Task is generated to the time the SDK responds that the execution is complete.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### local_activity_succeeded_endtoend_latency

Total latency of successfully finished Local Activity Executions (from schedule to completion).

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### local_activity_total

Total number of [Local Activity Executions](/local-activity).

- Type: Counter
- Available in: Core, Go, Java
- Tags: `activity_type`, `namespace`, `task_queue`

### long_request

Temporal Client made an RPC long poll request.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `operation`

### long_request_failure

Temporal Client made an RPC long poll request that failed. This number is included in the total `long_request` counter for long poll RPC requests.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `operation`

### long_request_latency

Latency of a Temporal Client gRPC long poll request.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `operation`

### nexus_poll_no_task

A Nexus Worker poll for a Nexus Task timed out, and no Nexus Task is available to pick from the Task Queue.

- Type: Counter
- Available in: Go, Java
- Tags: `namespace`, `task_queue`

### nexus_task_schedule_to_start_latency

The Schedule-To-Start time of a Nexus Task in seconds. The schedule time is taken from when the corresponding request hit the Frontend service to when the SDK started processing the task. This time is limited by the `Request-Timeout` header given to the Frontend when handling this request.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

### nexus_task_execution_failed

Handling of a Nexus Task resulted in an error. This includes any error returned from a user handler and unexpected internal errors in the SDK.

- Type: Counter
- Available in: Go, Java
- Tags: `namespace`, `task_queue`, `nexus_service`, `nexus_operation`, `failure_reason`

Valid values for the `failure_reason` tag:

- `internal_sdk_error`: There was an unexpected internal error within the SDK while handling the Nexus Task. Indicates a bug in the SDK.
- `handler_error_{TYPE}`: The user handler code returned a predefined error, as specified in the [Nexus spec](https://github.com/nexus-rpc/api/blob/main/SPEC.md#predefined-handler-errors). If the handler returns an unexpected error, the TYPE is set to `INTERNAL`.
- `timeout`: The user handler code did not return within the request timeout.
- `operation_failed`: The user handler code has indicated that the operation has failed. In Go, this maps to an `UnsuccessfulOperationError` with a `failed` state.
- `operation_canceled`: The user handler code has indicated that the operation has completed as canceled. In Go, this maps to an `UnsuccessfulOperationError` with a `canceled` state.

### nexus_task_execution_latency

Time to complete a Nexus Task, from the time the Nexus Task processing starts in the SDK to the time the user handler completes.

- Type: Histogram
- Available in: Go, Java
- Tags: `namespace`, `task_queue`, `nexus_service`, `nexus_operation`

### nexus_task_endtoend_latency

Total latency of Nexus Tasks from the time the corresponding request hit the Frontend to after the SDK gets acknowledgment from the server for task completion.

- Type: Histogram
- Available in: Go, Java
- Tags: `namespace`, `task_queue`, `nexus_service`, `nexus_operation`

### num_pollers

Current number of Worker Entities that are polling.

- Type: Gauge
- Available in: Core, Go, Java
- Tags: `namespace`, `poller_type`, `task_queue`

### poller_start

A Worker Entity poller was started.

- Type: Counter
- Available in: Go, Java
- Tags: `namespace`, `task_queue`

### request

Temporal Client made an RPC request.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `operation`

### request_failure

Temporal Client made an RPC request that failed. This number is included in the total `request` counter for RPC requests.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `operation`

### request_latency

Latency of a Temporal Client gRPC request.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `operation`

### resource_slots_cpu_usage

CPU usage as a value between 0 and 100, as perceived by the resource-based slots tuner (if enabled).

- Type: Gauge
- Available in: Core, Java

### resource_slots_mem_usage

Memory usage as a value between 0 and 100, as perceived by the resource-based slots tuner (if enabled).

- Type: Gauge
- Available in: Core, Java

### sticky_cache_hit

A Workflow Task found a cached Workflow Execution to run against.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

### sticky_cache_miss

A Workflow Task did not find a cached Workflow Execution to run against.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

### sticky_cache_size

Current cache size, expressed in number of Workflow Executions.

- Type: Gauge
- Available in: Core, Go, Java
- Tags: `namespace` (TypeScript, Java), `task_queue` (TypeScript)

### sticky_cache_total_forced_eviction

A Workflow Execution has been forced from the cache intentionally.

- Type: Counter
- Available in: Go, Java
- Tags: `namespace`, `task_queue`

### unregistered_activity_invocation

A Workflow Task requested to spawn an Activity Execution whose Activity Type is not registered with the Worker.

- Type: Counter
- Available in: Go
- Tags: `activity_type`, `namespace`, `task_queue`, `workflow_type`

### worker_start

A Worker Entity has been registered, created, or started.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `worker_type`

### worker_task_slots_available

The total number of Workflow, Activity, Local Activity, or Nexus Task execution slots that are currently available. Use the `worker_type` key to differentiate execution slots. The Worker type specifies an ability to perform certain tasks. For example, Workflow Workers execute Workflow Tasks, Activity Workers execute Activity Tasks, and so forth.
- Type: Gauge
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `worker_type`

### worker_task_slots_used

The total number of Workflow, Activity, Local Activity, or Nexus Task execution slots currently in use. Use the `worker_type` key to differentiate execution slots. The Worker type specifies an ability to perform certain tasks. For example, Workflow Workers execute Workflow Tasks, Activity Workers execute Activity Tasks, and so forth.

- Type: Gauge
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `worker_type`

### workflow_active_thread_count

Total number of Workflow threads in the Worker Process.

- Type: Gauge
- Available in: Java

### workflow_cancelled

A Workflow Execution ended because of a cancellation request.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### workflow_completed

A Workflow Execution completed successfully.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### workflow_continue_as_new

A Workflow ended with Continue-As-New.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### workflow_endtoend_latency

Total Workflow Execution time from schedule to completion for a single Workflow Run. (A retried Workflow Execution is a separate Run.)

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### workflow_failed

A Workflow Execution failed.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### workflow_task_execution_failed

A Workflow Task Execution failed.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`, `failure_reason`

Valid values for the `failure_reason` tag:

- `NonDeterminismError`: The Workflow Task failed due to a non-determinism error.
- `WorkflowError`: The Workflow Task failed for any other reason.

### workflow_task_execution_latency

Workflow Task Execution time.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### workflow_task_queue_poll_empty

A Workflow Worker polled a Task Queue and timed out without picking up a Workflow Task.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

### workflow_task_queue_poll_succeed

A Workflow Worker polled a Task Queue and successfully picked up a Workflow Task.

- Type: Counter
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

### workflow_task_replay_latency

Time to catch up on replaying a Workflow Task.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`, `workflow_type`

### workflow_task_schedule_to_start_latency

The Schedule-To-Start time of a Workflow Task.

- Type: Histogram
- Available in: Core, Go, Java
- Tags: `namespace`, `task_queue`

---

## Temporal Server options reference

You can run the [Temporal Server](/temporal-service/temporal-server) as a Go application by including the server package `go.temporal.io/server/temporal` and using it to create and start a Temporal Server. The Temporal Server services can be run in various ways. We recommend this approach only for a limited number of situations.

```go
s, err := temporal.NewServer()
if err != nil {
	log.Fatal(err)
}
err = s.Start()
if err != nil {
	log.Fatal(err)
}
```

`NewServer()` accepts `ServerOption` values as parameters. Each of the following functions returns a `ServerOption` that is applied to the instance.
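As a rough sketch, the options documented below compose like this. The example uses only the functions named on this page; the `"./config"` directory, `"development"` environment, and empty zone passed to the config loader are placeholder values, not required settings:

```go
package main

import (
	"log"

	"go.temporal.io/server/temporal"
)

func main() {
	// Run all default services, loading config from ./config/development.yaml
	// and blocking until Ctrl+C shuts the server down gracefully.
	s, err := temporal.NewServer(
		temporal.ForServices(temporal.Services),
		temporal.WithConfigLoader("./config", "development", ""),
		temporal.InterruptOn(temporal.InterruptCh()),
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := s.Start(); err != nil {
		log.Fatal(err)
	}
}
```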
Source code for parameter reference is here: https://github.com/temporalio/temporal/blob/main/temporal/server_option.go

### WithConfig

To launch a Temporal Server, a configuration file is required. The server automatically searches for this configuration in the default location `./config/development.yaml` when starting. If you need to use a custom configuration, you can specify it through the server's configuration option. For comprehensive details about configuration parameters and structure, refer to the [official configuration documentation](https://pkg.go.dev/go.temporal.io/server/common/config).

```go
s, err := temporal.NewServer(
	temporal.WithConfig(cfg),
)
```

### WithConfigLoader

Load a custom configuration from a file.

```go
s, err := temporal.NewServer(
	temporal.WithConfigLoader(configDir, env, zone),
)
```

### ForServices

Sets the list of all valid Temporal services. The default can be used from the `go.temporal.io/server/temporal` package.

```go
s, err := temporal.NewServer(
	temporal.ForServices(temporal.Services),
)
```

### InterruptOn

This option provides a channel that interrupts the server on a signal from that channel.

- If `temporal.InterruptOn()` is not passed, `server.Start()` never blocks and you need to call `server.Stop()` somewhere.
- If `temporal.InterruptOn(nil)` is passed, `server.Start()` blocks forever until the process is killed.
- If `temporal.InterruptOn(temporal.InterruptCh())` is passed, `server.Start()` blocks until you use Ctrl+C, which then gracefully shuts the server down.
- If `temporal.InterruptOn(someCustomChan)` is passed, `server.Start()` blocks until a signal is sent to `someCustomChan`.

```go
s, err := temporal.NewServer(
	temporal.InterruptOn(temporal.InterruptCh()),
)
```

### WithAuthorizer

Sets a low-level [authorization mechanism](/self-hosted-guide/security#authorizer-plugin) that determines whether to allow or deny inbound API calls.

```go
s, err := temporal.NewServer(
	temporal.WithAuthorizer(myAuthorizer),
)
```

### WithTLSConfigFactory

Overrides the default TLS configuration provider. `TLSConfigProvider` is defined in the `go.temporal.io/server/common/rpc/encryption` package.

```go
s, err := temporal.NewServer(
	temporal.WithTLSConfigFactory(yourTLSConfigProvider),
)
```

### WithClaimMapper

Configures a [mechanism to map roles](/self-hosted-guide/security#claim-mapper) to `Claims` for authorization.

```go
s, err := temporal.NewServer(
	temporal.WithClaimMapper(func(cfg *config.Config) authorization.ClaimMapper {
		logger := getYourLogger() // Replace with how you retrieve or initialize your logger
		return authorization.NewDefaultJWTClaimMapper(
			authorization.NewDefaultTokenKeyProvider(cfg, logger),
			cfg,
		)
	}),
)
```

### WithCustomMetricsReporter

Sets a custom tally metric reporter.

```go
s, err := temporal.NewServer(
	temporal.WithCustomMetricsReporter(myReporter),
)
```

See the [Uber tally docs on custom reporters](https://github.com/uber-go/tally#report-your-metrics) and a community implementation of [a reporter for Datadog's `dogstatsd` format](https://github.com/temporalio/temporal/pull/998#issuecomment-857884983).

---

## tctl v1.17 activity command reference

:::info tctl is deprecated

The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features.

Thank you for being a valued part of the Temporal community.

:::

The `tctl activity` commands enable [Activity Execution](/activity-execution) operations.
- [tctl activity complete](#complete)
- [tctl activity fail](#fail)

## complete

The `tctl activity complete` command completes an [Activity Execution](/activity-execution).

`tctl activity complete [<modifiers>]`

The following modifiers control the behavior of the command.

### --workflow_id

Specify the [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) of an [Activity Execution](/activity-execution) to complete.

Alias: `-w`

**Example**

```bash
tctl activity complete --workflow_id <workflow-id>
```

### --run_id

Specify the [Run Id](/workflow-execution/workflowid-runid#run-id) of an [Activity Execution](/activity-execution) to complete.

Alias: `-r`

**Example**

```bash
tctl activity complete --run_id <run-id>
```

### --activity_id

Specify the [Activity Id](/activity-execution#activity-id) of an [Activity Execution](/activity-execution) to complete.

**Example**

```bash
tctl activity complete --activity_id <activity-id>
```

### --result

Specify the result of an [Activity Execution](/activity-execution) when using tctl to complete the Activity Execution.

**Example**

```bash
tctl activity complete --result <result>
```

### --identity

Specify the identity of the operator when using tctl to complete an [Activity Execution](/activity-execution).

**Example**

```bash
tctl activity complete --identity <identity>
```

## fail

The `tctl activity fail` command fails an [Activity Execution](/activity-execution).

`tctl activity fail [<modifiers>]`

The following modifiers control the behavior of the command.

### --workflow_id

Specify the [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) of an [Activity Execution](/activity-execution) to fail.

Alias: `-w`

**Example**

```bash
tctl activity fail --workflow_id <workflow-id>
```

### --run_id

Specify the [Run Id](/workflow-execution/workflowid-runid#run-id) of an [Activity Execution](/activity-execution) to fail.

Alias: `-r`

**Example**

```bash
tctl activity fail --run_id <run-id>
```

### --activity_id

Specify the [Activity Id](/activity-execution#activity-id) of an [Activity Execution](/activity-execution) to fail.

**Example**

```bash
tctl activity fail --activity_id <activity-id>
```

### --reason

Specify the reason for failing an [Activity Execution](/activity-execution).

**Example**

```bash
tctl activity fail --reason <reason>
```

### --detail

Specify details of the reason for failing an [Activity Execution](/activity-execution).

**Example**

```bash
tctl activity fail --detail <details>
```

### --identity

Specify the identity of the operator when using tctl to fail an [Activity Execution](/activity-execution).

**Example**

```bash
tctl activity fail --identity <identity>
```

---

## tctl v1.17 admin command reference

:::info tctl is deprecated

The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features.

Thank you for being a valued part of the Temporal community.

:::

A `tctl admin` command allows the user to run admin operations.

Modifiers:

#### --help

`tctl admin [--help | -h]`

## cluster

The `tctl admin cluster` command runs administrator-level operations on a given Cluster.
`tctl admin cluster command [command modifiers] [arguments...]`

- [add_search_attributes](#add_search_attributes)
- [remove_search_attributes](#remove_search_attributes)
- [get_search_attributes](#get_search_attributes)
- [describe](#describe)
- [list](#list)
- [upsert_remote_cluster](#upsert_remote_cluster)
- [remove_remote_cluster](#remove_remote_cluster)

### add_search_attributes

The `tctl admin cluster add-search-attributes` command allows Search Attributes to be added to a Cluster. Custom Search Attributes can be used to make a Cluster more identifiable.

:::note
Due to Elasticsearch limitations, you can only add new custom Search Attributes. Existing Search Attributes cannot be renamed or removed from the Elasticsearch index.
:::

Use this command to add custom Search Attributes to your Temporal Cluster:

```bash
tctl admin cluster add-search-attributes --name <attribute-name> --type <attribute-type>
```

:::note
If you are adding custom Search Attributes to a Cluster running from the `docker-compose-es.yml` file in the [temporalio/docker-compose](https://github.com/temporalio/docker-compose) repo, make sure to increase the Docker memory to more than 6 GB.
:::

#### --skip_schema_update

Allows the user to skip the Elasticsearch index schema update.

:::note
This will only register the Search Attributes in Cluster metadata.
:::

#### --name

The name of the Search Attribute to add. Multiple names can be provided at once. Search Attribute names are case sensitive.

#### --type

The type of Search Attribute to add. Multiple values can be added at once.

Values: Text, Keyword, Int, Double, Bool, Datetime

### describe

The `tctl admin cluster describe` command provides information for the current Cluster.

The following modifier changes the behavior of the command:

#### --cluster

The name of a remote Cluster known to the current Cluster. This modifier is optional; if omitted, information for the current Cluster is returned.

### get_search_attributes

The `tctl admin cluster get_search_attributes` command retrieves existing Search Attributes for a given Cluster.

The following modifier changes the behavior of the command:

#### --print_json

Prints the existing Search Attributes in JSON format.

### list

The `tctl admin cluster list` command lists Cluster information on the given Cluster.

The modifier below changes the behavior of the command:

#### --pagesize

The size of the page that the list is printed on.

Default: 100

### remove_remote_cluster

The `tctl admin cluster remove_remote_cluster` command removes remote Cluster information on the given Cluster.

The modifier below changes the behavior of the operation:

#### --cluster

The name of the remote Cluster to remove.

### remove_search_attributes

The `tctl admin cluster remove-search-attributes` command removes custom Search Attribute metadata from a Cluster. This operation has no effect on the Elasticsearch index schema.

Use the following command to remove a [Search Attribute](/search-attribute) from a Cluster's metadata:

```bash
tctl admin cluster remove-search-attributes --name <attribute-name>
```

Only custom Search Attributes can be removed from a Cluster's metadata. Default Search Attributes cannot be removed.

Removing a Search Attribute removes it from the Cluster's metadata but does not remove it from the Elasticsearch index. This means that the Search Attribute can be added back later as the same type. After a Search Attribute has been added to the Elasticsearch index, it cannot be changed.
The following modifier changes the behavior of the operation:

#### --name

Name of the Search Attribute to remove.

### upsert_remote_cluster

The `tctl admin cluster upsert_remote_cluster` command adds or updates remote Cluster information in the current Cluster.

#### --frontend_address

The remote Cluster frontend address.

#### --enable_connection

Enables the remote Cluster connection.

## db

The `tctl admin db` command runs administrator-level operations on a given database.

### Usage

`tctl admin db command [command modifiers] [arguments...]`

### Commands

- [tctl admin db scan](#scan)
- [tctl admin db clean](#clean)

### clean

The `tctl admin db clean` command cleans corrupted [Workflow Executions](/workflow-execution) from the targeted database.

The modifiers below change the behavior of the command.

#### --db_engine

Type of DB engine to use.

Default: `cassandra`

Values: `cassandra` | `mysql` | `postgres`

#### --db_address

Persistence address for the database.

Default: 127.0.0.1

#### --db_port

Persistence port for the database.

Default: 9042

#### --username

Database username.

#### --password

Database password.

#### --keyspace

Database keyspace.

Default: "temporal"

#### --input_directory

The directory that contains the corrupted [Workflow Execution](/workflow-execution) files produced by running [`scan`](#scan).

#### --lower_shard_bound

The lower bound (inclusive) of the range of corrupt shards to handle.

Default: 0

#### --upper_shard_bound

The upper bound (exclusive) of the range of corrupt shards to handle.

Default: 16384

#### --starting_rps

Starting RPS of database queries.

Default: 100

#### --rps

Target RPS of database queries.

Default: 7000

#### --concurrency

Number of threads to handle a scan.

Default: 1000

#### --report_rate

The number of shards handled between each progress report.

Default: 10

:::note
Enable `--tls` before using any of the following modifiers.
:::

#### --tls_cert_path

Where the TLS client cert is located.

#### --tls_key_path

Where the TLS key is located.

#### --tls_ca_path

Where the TLS CA is located.

#### --tls_server_name

The name of the DB TLS server.

#### --tls_disable_host_verification

Disables verification of the DB TLS hostname and server cert.

### scan

The `tctl admin db scan` command scans concrete Workflow Executions in a given database and detects corrupted ones.

#### --db_engine

Type of DB engine to use.

Default: `cassandra`

Values: `cassandra` | `mysql` | `postgres`

#### --db_address

Persistence address for the database.

Default: 127.0.0.1

#### --db_port

Persistence port for the database.

Default: 9042

#### --username

DB username.

#### --password

DB password.

#### --keyspace

DB keyspace.

Default: "temporal"

#### --lower_shard_bound

The lower bound (inclusive) of the range of corrupt shards to handle.

Default: 0

#### --upper_shard_bound

The upper bound (exclusive) of the range of corrupt shards to handle.

Default: 16384

#### --starting_rps

Starting RPS of database queries.

Default: 100

#### --rps

Target RPS of database queries.

Default: 7000

#### --pagesize

The size of the page used to query database executions.

Default: 500

#### --concurrency

Number of threads to handle a scan.

Default: 1000

#### --report_rate

The number of shards handled between each progress report.

Default: 10

#### --tls

Enable TLS over the DB connection.

:::note
Enable `--tls` before using any of the following modifiers.
:::

#### --tls_cert_path

Where the TLS client cert is located.

#### --tls_key_path

Where the TLS key is located.

#### --tls_ca_path

Where the TLS CA is located.

#### --tls_server_name

The name of the DB TLS server.
#### --tls_disable_host_verification

Disables verification of the DB TLS hostname and server cert.

## decode

The `tctl admin decode` command allows the user to decode Payloads sent to and received from executed Activities.

`tctl admin decode command [command modifiers] [arguments...]`

- [proto](#proto)
- [base64](#base64)

### base64

The `tctl admin decode base64` command decodes base64 Payloads.

#### --base64_data

Data encoded in base64 format.

#### --base64_file

Path to a file that contains data in base64 format.

### proto

The `tctl admin decode proto` command decodes a Payload into proto format.

#### --type

The full name of the proto type to decode the Payload to.

#### --hex_data

Data encoded in hex format.

#### --hex_file

Path to a file that contains data in hex format.

#### --binary_file

Path to a file that contains data in binary format.

## dlq

The `tctl admin dlq` commands run admin operations on a given dead-letter queue (DLQ).

`tctl admin dlq command [command modifiers] [arguments...]`

- [tctl admin dlq read](#read)
- [tctl admin dlq purge](#purge)
- [tctl admin dlq merge](#merge)

### merge

The `tctl admin dlq merge` command allows dead-letter queue (DLQ) messages to be merged. The messages must have TaskIds with a value equal to or less than the given TaskId.

#### --dlq_type

The type of DLQ to manage.

Options: namespace, history

#### --cluster

Source cluster for the DLQ.

#### --shard_id

ShardId provided for the command.

#### --last_message_id

Identifies the last read message.

Default: 0

### purge

The `tctl admin dlq purge` command deletes DLQ messages that have a TaskId with a value equal to or less than the provided TaskId.

#### --dlq_type

The type of DLQ to manage.

Options: namespace, history

#### --cluster

Source cluster for the DLQ.

#### --shard_id

ShardId provided for the command.

#### --last_message_id

Identifies the last read message.

Default: 0

### read

The `tctl admin dlq read` command reads out messages from the dead-letter queue (DLQ).

#### --dlq_type

The type of DLQ to manage.

Options: namespace, history

#### --cluster

Source cluster for the DLQ.

#### --shard_id

ShardId provided for the command.

#### --max_message_count

The maximum number of messages to fetch.

Default: 0

#### --last_message_id

Identifies the last read message.

Default: 0

#### --output_filename

Provides a file to write output to. Output is written to stdout by default.

## history_host

The `tctl admin history_host` command runs an admin-level operation on the history host.

### Usage

`tctl admin history_host command [command options] [arguments...]`

### Commands

- [tctl admin history_host describe](#describe)
- [tctl admin history_host get_shardid](#get_shardid)

### describe

The `tctl admin history_host describe` command describes the internal information of a history host.

The following modifiers change the behavior of the command.

#### --workflow_id

Alias: `-w`

The WorkflowId of the Workflow whose history host is to be described.

#### --history_address

The history address of the history host.

#### --shard_id

The Id of the shard that belongs to the history host.

#### --print_full

Print a full and detailed summary of the history host.

### get_shardid

The `tctl admin history_host get_shardid` command gets the `shardId` for a given `namespaceId` and `workflowId`.

The following modifiers change the behavior of this command.

#### --namespace_id

The `namespaceId` used to compute the `shardId`.

#### --workflow_id

Alias: `-w`

The `workflowId` used to compute the `shardId`.
#### --number_of_shards

The total number of shards for the Temporal Cluster.

Default: 0

## membership

The `tctl admin membership` command allows admin operations to be run on membership items.

### Usage

`tctl admin membership command [command modifiers] [arguments...]`

### Commands

- [list_gossip](#list_gossip)
- [list_db](#list_db)

### list_db

The `tctl admin membership list_db` command lists the Cluster items in a targeted membership.

The following modifiers change the behavior of the command.

#### --heartbeated_within

Filters the list by last Heartbeat time.

#### --role

Filters the results by membership role.

Default: all

Values: all, frontend, history, matching, worker

### list_gossip

The `tctl admin membership list_gossip` command lists the ringpop membership items present on the targeted membership.

The following modifier changes the behavior of the command:

#### --role

Filters the results by membership role.

Default: all

Values: all, frontend, history, matching, worker

## shard

The `tctl admin shard` commands enable admin-level operations on a specified shard.

#### tctl admin shard commands

- [describe](#describe)
- [describe_task](#describe_task)
- [list_tasks](#list_tasks)
- [close_shard](#close_shard)
- [remove_task](#remove_task)

### close_shard

The `tctl admin shard close_shard` command closes the shard whose Id is given in the command.

`tctl admin shard close_shard [command options] [arguments...]`

The modifier below will change the behavior and output of the command.

#### --shard_id

The Id of the shard to close, as managed by the Temporal Cluster.

### describe_task

The `tctl admin shard describe_task` command describes a specified Task's TaskId, Task type, shard Id, and Task visibility timestamp.

The modifiers below control the output and behavior of the command. Enter all modifiers after the command:

`tctl admin shard describe_task [<modifiers>]`

#### --db_engine

The type of database (DB) engine for the shard to use.

Default: "cassandra"

Values: "cassandra", "mysql", "postgres"

#### --db_address

Persistence address for the database.

Default: 127.0.0.1

#### --db_port

Persistence port for the database.

Default: 9042

#### --username

Username entered into the database.

#### --password

Password entered into the database.

#### --keyspace

Keyspace for the database.

Default: "temporal"

#### --tls

Enables TLS over the database connection.

#### --tls_cert_path

DB TLS client cert path.

Note: TLS must be enabled.

#### --tls_server_name

DB TLS server name.

Note: TLS must be enabled.

#### --tls_disable_host_verification

Disables verification of the DB TLS hostname and server cert.

Note: TLS must be enabled.

#### --shard_id

Identifies the specified shard.

Default: 0

#### --task_id

The Id of the Task to describe.

Default: 0

#### --task_type

The kind of Task that is targeted within a shard.

Default: transfer

Values: transfer, timer, replication

#### --task_timestamp

Task visibility timestamp in nanoseconds.

Default: 0

#### --target_cluster

Temporal Cluster for the shard to use.

Default: "active"

### describe

The `tctl admin shard describe` command shows information for the shard with the specified Id.

The modifier below controls the behavior of the command.

#### --shard_id

The Id of the shard to describe.

Default: 0

### list_tasks

The `tctl admin shard list_tasks` command lists the Tasks available for a given shard Id and Task type.

The modifiers below affect the output and behavior of the command.

#### --more

List more pages of Tasks.
By default, one page of 10 Tasks is listed.

#### --pagesize

The size of the result page.

Default: 10

#### --target_cluster

Temporal Cluster to use.

Default: "active"

#### --shard_id

The Id of the shard.

Default: 0

#### --task_type

The type of Task.

Default: transfer

Values: transfer, timer, replication, visibility

#### --min_visibility_ts

The minimum value that can be set as a Task visibility timestamp.

Supported formats include:

- '2006-01-02T15:04:05+07:00'
- Raw UnixNano
- Time range (N-duration), where 0 < N < 1000000 and duration (full-notation/short-notation) can be:
  - second/s
  - minute/m
  - hour/h
  - day/d
  - week/w
  - month/M
  - year/y

#### --max_visibility_ts

The maximum value that can be set as a Task visibility timestamp.

Supported formats:

- '2006-01-02T15:04:05+07:00'
- Raw UnixNano
- Time range (N-duration), where 0 < N < 1000000 and duration (full-notation/short-notation) can be:
  - second/s
  - minute/m
  - hour/h
  - day/d
  - week/w
  - month/M
  - year/y

### remove_task

The `tctl admin shard remove_task` command removes a Task from the shard.

`tctl admin shard remove_task [command options] [arguments...]`

The removed Task must match the values given in the command line.

The modifiers below change the behavior of the command.

#### --shard_id

The shardId of the Task to be removed.

Default: 0

#### --task_id

The taskId of the Task to be removed.

Default: 0

#### --task_type

The type of Task to remove.

Default: transfer

Values: transfer, timer, replication

#### --task_timestamp

The Task visibility timestamp, given in nanoseconds.

Default: 0

## workflow

The `tctl admin workflow` commands enable administrator-level operations on Workflow Executions.

`tctl admin workflow command [modifiers] [arguments...]`

- [show](#show)
- [describe](#describe)
- [refresh_tasks](#refresh_tasks)
- [delete](#delete)

### delete

The `tctl admin workflow delete` command deletes the current [Workflow Execution](/workflow-execution) and the mutableState record.

#### --db_engine

The type of database (DB) engine to use.

Default: "cassandra"

Values: "cassandra", "mysql", "postgres"

#### --db_address

Persistence address for the database.

Default: 127.0.0.1

#### --db_port

Persistence port for the database.

Default: 9042

#### --username

Username entered into the database.

#### --password

Password entered into the database.

#### --keyspace

Keyspace for the database.

Default: "temporal"

#### --url

URL of the Elasticsearch cluster.

Default: "http://127.0.0.1:9200"

#### --es-username

Username for the Elasticsearch cluster.

#### --es-password

Password for the Elasticsearch cluster.

#### --version

The version of the Elasticsearch cluster for the Workflow.

Default: v7

Values: v6, v7

#### --index

Elasticsearch index name.

#### --workflow_id

Alias: `-w`

The Id of the current Workflow.

#### --run_id

Alias: `-r`

The Id of the current run.

#### --skip_errors

Skip any errors that occur during the deletion.

#### --tls

Enables TLS over the database connection.

:::note
TLS must be enabled to use the following modifiers.
:::

#### --tls_cert_path

DB TLS client cert path.
Note: TLS must be enabled.

#### --tls_key_path

DB TLS client key path.

Note: TLS must be enabled.

#### --tls_ca_path

DB TLS client CA path.

Note: TLS must be enabled.

#### --tls_server_name

DB TLS server name.

Note: TLS must be enabled.

#### --tls_disable_host_verification

Disables verification of the DB TLS hostname and server cert.

Note: TLS must be enabled.

### describe

The `tctl admin workflow describe` command describes internal information of the current [Workflow Execution](/workflow-execution).

#### --workflow_id

Alias: `-w`

The Id of the current Workflow.

#### --run_id

Alias: `-r`

The Id of the current run.

### refresh_tasks

The `tctl admin workflow refresh_tasks` command updates all [Tasks](/tasks#task) in a [Workflow](/workflows), provided that the command can fetch new information for the Tasks.

#### --workflow_id

Alias: `-w`

The Id of the current Workflow.

#### --run_id

Alias: `-r`

The Id of the current run.

### show

The `tctl admin workflow show` command displays Workflow history from the database.

#### --workflow_id

Alias: `-w`

The Id of the current Workflow.

#### --run_id

Alias: `-r`

The current RunId.

#### --min_event_id

The minimum Event Id to include in the history.

Default: 0

#### --max_event_id

The maximum Event Id to include in the history.

Default: 0

#### --min_event_version

The start Event version to be included in the history.

Default: 0

#### --max_event_version

The end Event version to be included in the history.

Default: 0

#### --output_filename

The file to which the output is written.

---

## tctl v1.17 batch command reference

:::info tctl is deprecated

The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features.

Thank you for being a valued part of the Temporal community.

:::

**How to run a tctl batch command.**

A `tctl batch` command enables you to affect multiple existing [Workflow Executions](/workflow-execution) with a single command. A batch job runs in the background and affects Workflow Executions one at a time.

Use [tctl batch start](#start) to start a batch job.

:::note
`tctl-v1` can run `batch` and `batch-v2` commands.
:::

When starting a batch job, you must provide a [List Filter](/list-filter) and the type of batch job that should occur. The List Filter identifies the set of Workflow Executions to be affected by the batch job. The `tctl batch start` command shows you how many Workflow Executions will be affected by the batch job and asks you to confirm before proceeding.

The batch type determines what other parameters you must provide and what is being affected. There are three types of batch jobs:

- Signal: Send a Signal to the set of Workflow Executions that the List Filter specifies.
- Cancel: Cancel the set of Workflow Executions that the List Filter specifies.
- Terminate: Terminate the set of Workflow Executions that the List Filter specifies.

A successfully started batch job returns a Job ID. You can use this Job ID in the `tctl batch describe` command, which describes the progress of a specific batch job. You can also use the Job ID to terminate the batch job itself. Terminating a batch job does not roll back the operations already performed by the batch job.
### tctl batch commands

- [tctl batch describe](#describe)
- [tctl batch list](#list)
- [tctl batch start](#start)
- [tctl batch terminate](#terminate)

## start

The `tctl batch start` command starts a batch job.

`tctl batch start --query <query>`

The following modifiers control the behavior of the command.

### `--query`

_Required modifier_

Specify the [Workflow Executions](/workflow-execution) that this batch job should operate on. The SQL-like query of [Search Attributes](/search-attribute) is the same as used by the `tctl workflow list --query` command.

Alias: `-q`

**Example**

```bash
tctl batch start --query <query>
```

### `--reason`

Specify a reason for running this batch job.

**Example**

```bash
tctl batch start --query <query> --reason <reason>
```

### `--batch_type`

Specify the operation that this batch job performs. The supported operations are `signal`, `cancel`, and `terminate`.

**Example**

```bash
tctl batch start --query <query> --batch_type <operation>
```

### `--signal_name`

Specify the name of a [Signal](/sending-messages#sending-signals). This modifier is required when `--batch_type` is `signal`.

**Example**

```bash
tctl batch start --query <query> --batch_type signal --signal_name <signal-name>
```

### `--input`

Pass input for the [Signal](/sending-messages#sending-signals). Input must be in JSON format.

Alias: `-i`

**Example**

```bash
tctl batch start --query <query> --input <json-input>
```

### `--rps`

Specify the RPS of processing. The default value is 50.

**Example**

```bash
tctl batch start --query <query> --rps <rps>
```

### `--yes`

Disable the confirmation prompt.

Alias: `-y`

**Example**

```bash
tctl batch start --query <query> --yes
```

## list

The `tctl batch list` command lists all batch jobs.

`tctl batch list`

:::note
`tctl-v1` can run `batch` and `batch-v2` commands.
:::

The following modifier controls the behavior of the command.

### --pagesize

Specify the maximum number of batch jobs to list on a page. The default value is 30.

**Example**

```bash
tctl batch list --pagesize <pagesize>
```

## describe

The `tctl batch describe` command describes the progress of a batch job.

`tctl batch describe --job_id <job-id>`

:::note
`tctl-v1` can run `batch` and `batch-v2` commands.
:::

The following modifier controls the behavior of the command.

### --job_id

_Required modifier_

Specify the Job ID of a batch job.

**Example**

```bash
tctl batch describe --job_id <job-id>
```

## terminate

The `tctl batch terminate` command terminates a batch job.

`tctl batch terminate --job_id <job-id>`

:::note
`tctl-v1` can run `batch` and `batch-v2` commands.
:::

The following modifiers control the behavior of the command.

### `--job_id`

_Required modifier_

Specify the Job ID of a batch job.

**Example**

```bash
tctl batch terminate --job_id <job-id>
```

### `--reason`

Specify a reason for terminating this batch job.

**Example**

```bash
tctl batch terminate --job_id <job-id> --reason <reason>
```

---

## tctl v1.17 cluster command reference

:::info tctl is deprecated

The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features.

Thank you for being a valued part of the Temporal community.

:::

The `tctl cluster` command enables [Temporal Cluster](/temporal-service) operations.
- [tctl cluster health](#health)
- [tctl cluster get-search-attributes](#get-search-attributes)

## get-search-attributes

The `tctl cluster get-search-attributes` command lists all [Search Attributes](/search-attribute) that can be used in the `--query` modifier of the [`tctl workflow list`](/tctl-v1/workflow#list) command and the `--search_attr_key` and `--search_attr_value` modifiers of the [`tctl workflow run`](/tctl-v1/workflow#run) and [`tctl workflow start`](/tctl-v1/workflow#start) commands.

**Example:**

```bash
tctl cluster get-search-attributes
```

The command has no modifiers.

Example output:

```text
+-----------------------+----------+
| NAME                  | TYPE     |
+-----------------------+----------+
| BinaryChecksums       | Keyword  |
| CloseTime             | Int      |
| CustomBoolField       | Bool     |
| CustomDatetimeField   | Datetime |
| CustomDoubleField     | Double   |
| CustomIntField        | Int      |
| CustomKeywordField    | Keyword  |
| CustomNamespace       | Keyword  |
| CustomStringField     | String   |
| ExecutionStatus       | Int      |
| ExecutionTime         | Int      |
| Operator              | Keyword  |
| RunId                 | Keyword  |
| StartTime             | Int      |
| TaskQueue             | Keyword  |
| TemporalChangeVersion | Keyword  |
| WorkflowId            | Keyword  |
| WorkflowType          | Keyword  |
+-----------------------+----------+
```

The admin version of this command displays default and custom Search Attributes separately, and also shows the underlying Elasticsearch index schema and system Workflow status.

## health

The `tctl cluster health` command checks the health of the [Frontend Service](/temporal-service/temporal-server#frontend-service).

`tctl cluster health`

The command has no modifiers.

---

## tctl v1.17 data-converter command reference

:::info tctl is deprecated

The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features.

Thank you for being a valued part of the Temporal community.

:::

The `tctl dataconverter` command enables custom [Data Converter](/dataconversion) operations.

- [tctl dataconverter web](#web)

## web

The `tctl dataconverter web` command specifies the WebSocket URL of a custom [Data Converter](/dataconversion) to use with Temporal Web.

`tctl dataconverter web --web_ui_url <url>`

The following modifiers control the behavior of the command.

### --port

Specify a port for the WebSocket URL of a custom [Data Converter](/dataconversion). The default value is 0.

**Example**

```bash
tctl dataconverter web --web_ui_url <url> --port <port>
```

### --web_ui_url

_Required modifier_

Specify the WebSocket URL of a custom [Data Converter](/dataconversion).

**Example**

```bash
tctl dataconverter web --web_ui_url <url>
```

---

## tctl v1.17 command reference

:::info tctl is deprecated

The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features.

Thank you for being a valued part of the Temporal community.

:::

:::note
This documentation reflects tctl version 1.17.
:::

The Temporal CLI (tctl) is a command-line tool that you can use to interact with a Temporal Cluster. It can perform [Namespace](/namespaces) operations (such as register, update, and describe) and [Workflow](/workflows) operations (such as start Workflow, show Workflow History, and Signal Workflow).
- [How to install tctl](#install)
- [Environment variables for tctl](#environment-variables)

## tctl commands

- [tctl activity](/tctl-v1/activity/)
- [tctl admin](/tctl-v1/admin/)
- [tctl batch](/tctl-v1/batch/)
- [tctl cluster](/tctl-v1/cluster/)
- [tctl dataconverter](/tctl-v1/dataconverter/)
- [tctl namespace](/tctl-v1/namespace/)
- [tctl taskqueue](/tctl-v1/taskqueue/)
- [tctl workflow](/tctl-v1/workflow/)

## How to install tctl {#install}

> The Temporal tctl documentation covers version 1.17 of the Temporal CLI.

You can install [tctl](/tctl-v1) in the following ways.

- Install locally by using [Homebrew](https://brew.sh/): `brew install tctl`
- Run locally together with Temporal Server in [Docker Compose](https://github.com/temporalio/docker-compose): `docker exec temporal-admin-tools tctl YOUR COMMANDS HERE`
  - To invoke [tctl](/tctl-v1) as though it is installed locally (such as `tctl namespace describe`), set an alias: `alias tctl="docker exec temporal-admin-tools tctl"`
- Run the [temporal-admin-tools](https://hub.docker.com/r/temporalio/admin-tools) Docker image:
  - On Linux: `docker run --rm -it --entrypoint tctl --network host --env TEMPORAL_CLI_ADDRESS=localhost:7233 temporalio/admin-tools:1.14.0`
  - On macOS or Windows: `docker run --rm -it --entrypoint tctl --env TEMPORAL_CLI_ADDRESS=host.docker.internal:7233 temporalio/admin-tools:1.14.0`
  - If your Temporal Server is running on a remote host, change the value of `TEMPORAL_CLI_ADDRESS`.
  - To simplify command lines, create a `tctl` alias.
- Install the latest version of tctl in your `GOPATH`: `go install github.com/temporalio/tctl/cmd/tctl@latest`

**Note:** To use [tctl](/tctl-v1), you must have a Temporal Server running.

To see help for [tctl](/tctl-v1) commands, enter the following commands.

| Command             | Description                                            |
| ------------------- | ------------------------------------------------------ |
| `tctl -h`           | Display help for top-level commands and global options |
| `tctl namespace -h` | Display help for [Namespace](/namespaces) operations   |
| `tctl workflow -h`  | Display help for [Workflow](/workflows) operations     |
| `tctl taskqueue -h` | Display help for [Task Queue](/task-queue) operations  |

## Global modifiers

You can supply the values for many of these modifiers by setting [environment variables](#environment-variables) instead of including the modifiers in a tctl command.

### --address

Specify a host and port for the Frontend Service. The default is `127.0.0.1:7233`.

### --auto_confirm

Automatically confirm all prompts.

### --context_timeout

Specify a timeout for the context of an RPC call in seconds. The default value is 5.

### --data_converter_plugin

Specify the name of the executable for a custom Data Converter plugin.

### --headers_provider_plugin

Specify the name of the executable for a headers provider plugin.

### --help

Display help for tctl in the CLI.

Alias: `-h`

### --namespace

Specify a Namespace. By using this modifier, you don't need to specify a `--namespace` modifier for a sub-command. The default Namespace is `default`.

Alias: `--ns`

### --tls_ca_path

Specify the path to a server Certificate Authority (CA) certificate file.

### --tls_cert_path

Specify the path to a public X.509 certificate file for mutual TLS authentication. If you use this modifier, you must also use the `--tls_key_path` modifier.

### --tls_disable_host_verification

Disable verification of the server certificate (and thus host verification).

### --tls_key_path

Specify the path to a private key file for mutual TLS authentication.
If you use this modifier, you must also use the `--tls_cert_path` modifier.

### --tls_server_name

Specify an override for the name of the target server that is used for TLS host verification. The name must be one of the DNS names listed in the server TLS certificate. Specifying this modifier also enables host verification.

### --version

Display the version of tctl in the CLI.

### --codec_endpoint

Specify the URL and port number for a Codec Server.

## Environment variables

Setting environment variables for repeated parameters can shorten tctl commands.

### TEMPORAL_CLI_ADDRESS

Specify a host and port for the Frontend Service. The default is `127.0.0.1:7233`.

### TEMPORAL_CLI_AUTHORIZATION_TOKEN

Specify a token to be used by the HTTP Basic Authorization plugin.

### TEMPORAL_CLI_AUTH

Specify the authorization header to be set for a gRPC request.

### TEMPORAL_CLI_NAMESPACE

Specify a Namespace. By setting this variable, you don't need to specify a `--namespace` modifier in a tctl command. The default Namespace is `default`.

### TEMPORAL_CLI_PLUGIN_DATA_CONVERTER

Specify the name of the executable for a custom Data Converter plugin.

### TEMPORAL_CLI_PLUGIN_HEADERS_PROVIDER

Specify the name of the executable for a headers provider plugin.

### TEMPORAL_CLI_TLS_CA

Specify the path to a server Certificate Authority (CA) certificate file.

### TEMPORAL_CLI_TLS_CERT

Specify the path to a public X.509 certificate file for mutual TLS authentication.

### TEMPORAL_CLI_TLS_DISABLE_HOST_VERIFICATION

Set to disable verification of the server certificate (and thus host verification).

### TEMPORAL_CLI_TLS_KEY

Specify the path to a private key file for mutual TLS authentication. If you set this variable, you must also set the `TEMPORAL_CLI_TLS_CERT` variable.

### TEMPORAL_CLI_TLS_SERVER_NAME

Specify an override for the name of the target server that is used for TLS host verification. The name must be one of the DNS names listed in the server TLS certificate. Setting this variable also enables host verification.

### TEMPORAL_CONTEXT_TIMEOUT

Specify a timeout for the context of an RPC call in seconds. The default value is 5.

---

## tctl v1.17 namespace command reference

:::info tctl is deprecated

The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features.

Thank you for being a valued part of the Temporal community.

:::

The `tctl namespace` commands enable [Namespace](/namespaces) operations.

Alias: `n`

- [tctl namespace describe](#describe)
- [tctl namespace list](#list)
- [tctl namespace register](#register)
- [tctl namespace update](#update)

## describe

The `tctl namespace describe` command describes a [Namespace](/namespaces).

`tctl namespace describe`

The following modifier controls the behavior of the command.

### --namespace_id

Specify the Id of a Namespace to describe. This modifier is required unless the global `--namespace` modifier is specified (`tctl --namespace <name> namespace describe`).

**Example**

```bash
tctl namespace describe --namespace_id <namespace-id>
```

Example results for a [Global Namespace](/global-namespace):

```bash
$ tctl --ns canary-namespace n desc
Name: canary-namespace
Description: testing namespace
OwnerEmail: dev@yourtech.io
NamespaceData:
Status: REGISTERED
RetentionInDays: 7
EmitMetrics: true
ActiveClusterName: dc1
Clusters: dc1, dc2
```

## list

The `tctl namespace list` command lists all [Namespaces](/namespaces).
`tctl namespace list` The command has no modifiers. ## register The `tctl namespace register` command registers a [Namespace](/namespaces). `tctl namespace register` By default, Temporal uses a "default" Namespace. Create and register a new Namespace with the following command:
```bash
tctl --namespace your-namespace namespace register
# OR using short alias
tctl --ns your-namespace n re
```
The following modifiers control the behavior of the command. ### --active_cluster Specify the name of the active [Temporal Cluster](/temporal-service) when registering a [Namespace](/namespaces). This value changes for Global Namespaces when a failover occurs. **Example** ```bash tctl namespace register --active_cluster <cluster-name> ``` ### --clusters Specify a list of [Temporal Clusters](/temporal-service) when registering a [Namespace](/namespaces). The list contains the names of Clusters (separated by spaces) to which the Namespace can fail over. Make sure to include the currently active Cluster. This is a read-only setting and cannot be changed. This modifier is valid only when the `--global_namespace` modifier is set to true. **Example** ```bash tctl namespace register --clusters <cluster-names> ``` ### --description Specify a description when registering a [Namespace](/namespaces). **Example** ```bash tctl namespace register --description <description> ``` ### --global_namespace Specifies whether a [Namespace](/namespaces) is a [Global Namespace](/global-namespace). When enabled, it controls the creation of replication tasks on updates, allowing the state to be replicated across Clusters. This is a read-only setting and cannot be changed. **Example** ```bash tctl namespace register --global_namespace <boolean> ``` ### --history_archival_state Set the state of [Archival](/temporal-service/archival). Valid values are `disabled` and `enabled`. **Example** ```bash tctl namespace register --history_archival_state <state> ``` ### --history_uri Specify the URI for [Archival](/temporal-service/archival). The URI cannot be changed after Archival is first enabled. **Example** ```bash tctl namespace register --history_uri <uri> ``` ### --namespace_data Specify data for a [Namespace](/namespaces) in the form of key-value pairs (such as `k1:v1,k2:v2,k3:v3`). **Example** ```bash tctl namespace register --namespace_data <data> ``` ### --owner_email Specify the email address of the [Namespace](/namespaces) owner. **Example** ```bash tctl namespace register --owner_email <email> ``` ### --retention Set the [Retention Period](/temporal-service/temporal-server#retention-period) for the [Namespace](/namespaces). The Retention Period applies to Closed [Workflow Executions](/workflow-execution). **Example** ```bash tctl namespace register --retention <days> ``` ### --visibility_archival_state Set the visibility state for [Archival](/temporal-service/archival). Valid values are `disabled` and `enabled`. **Example** ```bash tctl namespace register --visibility_archival_state <state> ``` ### --visibility_uri Specify the visibility URI for [Archival](/temporal-service/archival). The URI cannot be changed after Archival is first enabled. **Example** ```bash tctl namespace register --visibility_uri <uri> ```
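These modifiers can be combined in a single registration. The following is a minimal sketch, assuming a hypothetical two-Cluster deployment named `dc1` and `dc2`; all values are illustrative:
```bash
# Register a Global Namespace that is active on dc1, can fail over to dc2,
# and retains closed Workflow Executions for 7 days.
tctl --namespace your-namespace namespace register \
  --global_namespace true \
  --active_cluster dc1 \
  --clusters dc1 dc2 \
  --retention 7 \
  --description 'testing namespace' \
  --owner_email dev@yourtech.io
```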
## update The `tctl namespace update` command updates a [Namespace](/namespaces). `tctl namespace update` The following modifiers control the behavior of the command. ### --active_cluster Specify the name of the active [Temporal Cluster](/temporal-service) when updating a [Namespace](/namespaces). **Example** ```bash tctl namespace update --active_cluster <cluster-name> ``` ### --add_bad_binary Add a binary checksum to use when resetting a [Workflow Execution](/workflow-execution). Temporal will not dispatch any [Commands](/workflow-execution#command) to the given binary. See also [`--remove_bad_binary`](#--remove_bad_binary). **Example** ```bash tctl namespace update --add_bad_binary <checksum> ``` ### --clusters Specify a list of [Temporal Clusters](/temporal-service) when updating a [Namespace](/namespaces). The list contains the names of Clusters (separated by spaces) to which the Namespace can fail over. This modifier is valid only when the `--global_namespace` modifier is set to true. **Example** ```bash tctl namespace update --clusters <cluster-names> ``` ### --description Specify a description when updating a [Namespace](/namespaces). **Example** ```bash tctl namespace update --description <description> ``` ### --history_archival_state Set the state of [Archival](/temporal-service/archival). Valid values are `disabled` and `enabled`. **Example** ```bash tctl namespace update --history_archival_state <state> ``` ### --history_uri Specify the URI for [Archival](/temporal-service/archival). The URI cannot be changed after Archival is first enabled. **Example** ```bash tctl namespace update --history_uri <uri> ``` ### --namespace_data Specify data for a [Namespace](/namespaces) in the form of key-value pairs (such as `k1:v1,k2:v2,k3:v3`). **Example** ```bash tctl namespace update --namespace_data <data> ``` ### --owner_email Specify the email address of the [Namespace](/namespaces) owner. **Example** ```bash tctl namespace update --owner_email <email> ``` ### --reason Specify a reason for updating a [Namespace](/namespaces). **Example** ```bash tctl namespace update --reason <reason> ``` ### --remove_bad_binary Remove a binary checksum. See also [`--add_bad_binary`](#--add_bad_binary). **Example** ```bash tctl namespace update --remove_bad_binary <checksum> ``` ### --retention Specify the number of days to retain [Workflow Executions](/workflow-execution). **Example** ```bash tctl namespace update --retention <days> ``` ### --visibility_archival_state Set the visibility state for [Archival](/temporal-service/archival). Valid values are `disabled` and `enabled`. **Example** ```bash tctl namespace update --visibility_archival_state <state> ``` ### --visibility_uri Specify the visibility URI for [Archival](/temporal-service/archival). The URI cannot be changed after Archival is first enabled. **Example** ```bash tctl namespace update --visibility_uri <uri> ``` --- ## tctl v1.17 schedule command reference :::info tctl is deprecated The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features. Thank you for being a valued part of the Temporal community. ::: A [Schedule](/schedule) is an experimental feature available in `tctl 1.17` and `tctl next`. - [Backfill a Schedule using tctl](#backfill) - [Create a Schedule using tctl](#create) - [Delete a Schedule using tctl](#delete) - [Describe a Schedule using tctl](#describe) - [List Schedules using tctl](#list) - [Toggle Pause on Schedule using tctl](#toggle) - [Trigger an Action on a Schedule using tctl](#trigger) - [Update a Schedule using tctl](#update) ## backfill Backfilling a Schedule means having it do now what it would have done over a specified time range (generally in the past, although it won't prevent you from giving a time range in the future).
You might use this to fill in runs from a time period when the Schedule was paused due to an external condition that's now resolved, or a period before the Schedule was created.
```shell
tctl schedule backfill --sid 'your-schedule-id' \
  --overlap-policy 'BufferAll' \
  --start-time '2022-05-01T00:00:00Z' \
  --end-time '2022-05-31T23:59:59Z'
```
Note that, as with [tctl schedule trigger](#trigger), you probably want to override the Overlap Policy. Specifying `AllowAll` runs all the backfilled Workflows at once; `BufferAll` runs them sequentially. The other policies don't make much sense in this context. ## create With tctl, create a Schedule like this:
```shell
$ tctl config set version next  # ensure you're using the new tctl
$ tctl schedule create \
    --schedule-id 'your-schedule-id' \
    --interval '5h/15m' \
    --calendar '{"dayOfWeek":"Fri","hour":"11","minute":"3"}' \
    --overlap-policy 'BufferAll' \
    --workflow-id 'your-workflow-id' \
    --task-queue 'your-task-queue' \
    --workflow-type 'YourWorkflowType'
```
This Schedule takes action every 5 hours at 15 minutes past the hour and also at 11:03 on Fridays. It starts a Workflow `YourWorkflowType` on Task Queue `your-task-queue`, giving it a Workflow Id like `your-workflow-id-2022-06-17T11:03:00Z`. Workflows do not run in parallel. If they would otherwise overlap, they are buffered to run sequentially. You can also use traditional cron strings, including all features that are supported by `CronSchedule` today, such as `@weekly` and other shorthands, `@every`, and `CRON_TZ`.
```shell
$ tctl schedule create \
    --schedule-id 'your-schedule-id' \
    --cron '3 11 * * Fri' \
    --workflow-id 'your-workflow-id' \
    --task-queue 'your-task-queue' \
    --workflow-type 'YourWorkflowType'
```
Temporal Workflow Schedule Cron strings follow this format:
```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
```
Any combination of `--calendar`, `--interval`, and `--cron` is supported, and Actions will happen at any of the specified times. If you use both `--time-zone` and `CRON_TZ`, they must agree. See `tctl schedule create --help` for the full set of available options. ## delete A Schedule can be deleted. Deleting a Schedule **does not** affect any Workflows started by the Schedule. Workflow Executions started by Schedules can be cancelled or terminated using the same methods as any others. However, Workflow Executions started by a Schedule can be identified by the Search Attributes added to them and can be targeted by a [batch](/tctl-v1/batch/) command for termination.
```shell
$ tctl schedule delete --schedule-id 'your-schedule-id'
```
## describe Display the current Schedule configuration as well as extra information about past, current, and future Runs.
```shell
tctl schedule describe --schedule-id 'your-schedule-id'
```
Because the Schedule Spec is converted to canonical representations, the output might not be in the same form as it was input. ## list
```shell
tctl schedule list
```
Note that if you're using standard Visibility, listing Schedules will currently only include Schedule Ids and no other information. Because the Schedule Spec is converted to canonical representations, the output might not be in the same form as it was input.
## toggle
```shell
$ tctl schedule toggle --schedule-id 'your-schedule-id' --pause --reason "paused because the database is down"
$ tctl schedule toggle --schedule-id 'your-schedule-id' --unpause --reason "the database is back up"
```
## trigger Starting a Workflow Run immediately with a Schedule, regardless of its configured Spec, is a common use case.
```shell
$ tctl schedule trigger --schedule-id 'your-schedule-id'
```
Note that the action that it takes is subject to the Overlap Policy of the Schedule by default: if the Overlap Policy is `Skip` and a Workflow is already running, the triggered Action to start the next Workflow Run is skipped! Likewise, if the Overlap Policy is `BufferAll`, the triggered run is buffered behind one or more runs. If you really want it to run right now, you can override the Overlap Policy for this request:
```shell
$ tctl schedule trigger --schedule-id 'your-schedule-id' --overlap-policy 'AllowAll'
```
## update Any part of the Schedule configuration can be updated at any time. `tctl schedule update` takes the same options as `tctl schedule create` and replaces the entire configuration of the Schedule with what's provided. This means that if you want to change just one value, you have to provide everything else again.
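For example, to change only the Overlap Policy of the Schedule created earlier, you still resubmit the full configuration. A sketch, reusing the illustrative values from the create example above:
```shell
# Re-supply the entire configuration; only the overlap policy changes.
tctl schedule update \
  --schedule-id 'your-schedule-id' \
  --interval '5h/15m' \
  --calendar '{"dayOfWeek":"Fri","hour":"11","minute":"3"}' \
  --overlap-policy 'Skip' \
  --workflow-id 'your-workflow-id' \
  --task-queue 'your-task-queue' \
  --workflow-type 'YourWorkflowType'
```
Because the update replaces the whole configuration, any option omitted here is dropped rather than preserved.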
--- ## tctl v1.17 taskqueue command reference :::info tctl is deprecated The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features. Thank you for being a valued part of the Temporal community. ::: The `tctl taskqueue` command enables [Task Queue](/task-queue) operations. Alias: `t` - [tctl taskqueue describe](#describe) - [tctl taskqueue list-partition](#list-partition) ## describe The `tctl taskqueue describe` command describes the poller information of a [Task Queue](/task-queue). `tctl taskqueue describe <modifiers>` The following modifiers control the behavior of the command. ### --taskqueue _Required modifier_ Specify a [Task Queue](/task-queue). Alias: `--t` **Example** ```bash tctl taskqueue describe --taskqueue <name> ``` ### --taskqueuetype Specify the type of a [Task Queue](/task-queue). The type can be `workflow` or `activity`. The default is `workflow`. **Example** ```bash tctl taskqueue describe --taskqueue <name> --taskqueuetype <type> ``` ## list-partition The `tctl taskqueue list-partition` command lists the partitions of a [Task Queue](/task-queue) and the hostname for the partitions. `tctl taskqueue list-partition --taskqueue <name>` The following modifier controls the behavior of the command. ### --taskqueue _Required modifier_ Specify a [Task Queue](/task-queue). Alias: `--t` **Example** ```bash tctl taskqueue list-partition --taskqueue <name> ``` --- ## tctl v1.17 workflow command reference :::info tctl is deprecated The tctl command line utility has been deprecated and is no longer actively supported. We recommend transitioning to [Temporal CLI](/cli) for continued use and access to new features. Thank you for being a valued part of the Temporal community. ::: The `tctl workflow` commands enable [Workflow Execution](/workflow-execution) operations. - [tctl workflow cancel](#cancel) - [tctl workflow count](#count) - [tctl workflow describe](#describe) - [tctl workflow describeid](#describeid) - [tctl workflow list](#list) - [tctl workflow listall](#listall) - [tctl workflow listarchived](#listarchived) - [tctl workflow observe](#observe) - [tctl workflow observeid](#observeid) - [tctl workflow query](#query) - [tctl workflow reset](#reset) - [tctl workflow reset-batch](#reset-batch) - [tctl workflow run](#run) - [tctl workflow scan](#scan) - [tctl workflow show](#show) - [tctl workflow showid](#showid) - [tctl workflow signal](#signal) - [tctl workflow stack](#stack) - [tctl workflow start](#start) - [tctl workflow terminate](#terminate) ## cancel The `tctl workflow cancel` command cancels a [Workflow Execution](/workflow-execution). Canceling a running Workflow Execution records a `WorkflowExecutionCancelRequested` event in the History. A new [Workflow Task](/tasks#workflow-task) will be scheduled. After cancellation, the Workflow Execution can perform cleanup work. See also [`tctl workflow terminate`](#terminate). `tctl workflow cancel <modifiers>` The following modifiers control the behavior of the command. ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow cancel --workflow_id <workflow-id> ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). Alias: `-r` **Example** ```bash tctl workflow cancel --run_id <run-id> ``` ## count The `tctl workflow count` command counts [Workflow Executions](/workflow-execution). This command requires Elasticsearch to be enabled. `tctl workflow count <modifiers>` The following modifier controls the behavior of the command. ### --query _Required modifier_ Specify an SQL-like query of [Search Attributes](/search-attribute). Alias: `-q` **Example** To count all open [Workflow Executions](/workflow-execution): ```bash tctl workflow count --query 'ExecutionStatus="Running"' ``` ## describe The `tctl workflow describe` command shows information about a [Workflow Execution](/workflow-execution). This information can be used to locate a failed Workflow Execution, for example. To find a Workflow with a given Run Id, refer to [`tctl workflow describeid`](#describeid). `tctl workflow describe <modifiers>` The following modifiers control the behavior of the command. Always include required modifiers when executing this command. ### --workflow_id **This is a required modifier.** Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow describe --workflow_id <workflow-id> ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). If a Run Id is not provided, the command will show the latest Workflow Execution of that Workflow Id. Alias: `-r` **Example** ```bash tctl workflow describe --run_id <run-id> ``` ### --print_raw Print properties exactly as they are stored. **Example** ```bash tctl workflow describe --print_raw ``` ### --reset_points_only Show only events that are eligible for reset. If successful, the command returns the Run Id of all deployments, and the times at which the Events were created. **Example** ```bash tctl workflow describe --reset_points_only ``` ## describeid The `tctl workflow describeid` command shows information about a [Workflow Execution](/workflow-execution) for the specified [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) and optional [Run Id](/workflow-execution/workflowid-runid#run-id).
`tctl workflow describeid <workflow-id> [<run-id>]` This command is a shortcut for `tctl workflow describe --workflow_id <workflow-id> --run_id <run-id>`. The following modifiers control the behavior of the command. ### --print_raw Print properties exactly as they are stored. **Example** ```bash tctl workflow describeid --print_raw ``` ### --reset_points_only Show only events that are eligible for reset. **Example** ```bash tctl workflow describeid --reset_points_only ``` ## list The `tctl workflow list` command lists open or closed [Workflow Executions](/workflow-execution). By default, this command lists a maximum of 10 closed Workflow Executions. - To set the size of a page, use the `--pagesize` option. - To list multiple pages, use the `--more` option. - To list open Workflow Executions, use the `--open` option. See also [`tctl workflow listall`](#listall), [`tctl workflow listarchived`](#listarchived), and [`tctl workflow scan`](#scan). `tctl workflow list <modifiers>` The following modifiers control the behavior of the command. ### --print_raw_time Print the raw timestamp. **Example** ```bash tctl workflow list --print_raw_time ``` ### --print_datetime Print the timestamp. **Example** ```bash tctl workflow list --print_datetime ``` ### --print_memo Print a memo. **Example** ```bash tctl workflow list --print_memo ``` ### --print_search_attr Print the [Search Attributes](/search-attribute). **Example** ```bash tctl workflow list --print_search_attr ``` ### --print_full Print the full message without table formatting. **Example** ```bash tctl workflow list --print_full ``` ### --print_json Print the raw JSON objects. **Example** ```bash tctl workflow list --print_json ``` ### --open List open [Workflow Executions](/workflow-execution). (By default, the `tctl workflow list` command lists closed Workflow Executions.) **Example** ```bash tctl workflow list --open ``` ### --earliest_time Specify the earliest start time to list. Supported formats are as follows: - `<year>-<month>-<day>T<hour>:<minute>:<second><+|-><offset>` (an ISO 8601 datetime with a UTC offset). - Raw Unix Epoch time (the number of milliseconds since 0000 UTC on January 1, 1970). - `<n><duration>`, where `<n>` is a value between 0 and 1000000 and `<duration>` is one of the following: - `second` or `s` - `minute` or `m` - `hour` or `h` - `day` or `d` - `week` or `w` - `month` or `M` - `year` or `y` **Examples** To specify 3:04:05 PM India Standard Time on January 2, 2022: ```bash tctl workflow list --earliest_time '2022-01-02T15:04:05+05:30' ``` To specify 15 minutes before the current time: ```bash tctl workflow list --earliest_time '15minute' ``` ### --latest_time Specify the latest start time to list. Supported formats are as follows: - `<year>-<month>-<day>T<hour>:<minute>:<second><+|-><offset>` (an ISO 8601 datetime with a UTC offset). - Raw Unix Epoch time (the number of milliseconds since 0000 UTC on January 1, 1970). - `<n><duration>`, where `<n>` is a value between 0 and 1000000 and `<duration>` is one of the following: - `second` or `s` - `minute` or `m` - `hour` or `h` - `day` or `d` - `week` or `w` - `month` or `M` - `year` or `y` **Examples** To specify 11:02:17 PM Pacific Daylight Time on April 13, 2022: ```bash tctl workflow list --latest_time '2022-04-13T23:02:17-07:00' ``` To specify 10 seconds before the current time: ```bash tctl workflow list --latest_time '10second' ``` ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow list --workflow_id <workflow-id> ``` ### --workflow_type Specify the name of a [Workflow Type](/workflow-definition#workflow-type). **Example** ```bash tctl workflow list --workflow_type <name> ``` ### --status Specify the status of a [Workflow Execution](/workflow-execution).
Supported values are as follows: - `completed` - `failed` - `canceled` - `terminated` - `continuedasnew` - `timedout` **Example** ```bash tctl workflow list --status <status> ``` ### --query **How to list and filter Workflow Executions with a [List Filter](/list-filter) using tctl.** The `--query` flag is supported only when [Advanced Visibility](/visibility#advanced-visibility) is configured with the Cluster. Using the `--query` option causes tctl to ignore all other filter options, including `open`, `earliest_time`, `latest_time`, `workflow_id`, and `workflow_type`. Alias: `-q` **Example** ```bash tctl workflow list --query 'WorkflowId="<workflow-id>"' ``` More examples:
```bash
tctl workflow list \
  --query "WorkflowType='main.SampleParentWorkflow' AND ExecutionStatus='Running'"
```
```bash
tctl workflow list \
  --query '(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"' \
  --print_search_attr
```
```bash
tctl workflow list \
  --query 'CustomKeywordField in ("keyword2", "keyword1") and CustomIntField >= 5 and CloseTime between "2018-06-07T16:16:36-08:00" and "2019-06-07T16:46:34-08:00" order by CustomDatetimeField desc' \
  --print_search_attr
```
```bash
tctl workflow list \
  --query 'WorkflowType = "main.Workflow" and (WorkflowId = "1645a588-4772-4dab-b276-5f9db108b3a8" or RunId = "be66519b-5f09-40cd-b2e8-20e4106244dc")'
```
```bash
tctl workflow list \
  --query 'WorkflowType = "main.Workflow" and StartTime > "2019-06-07T16:46:34-08:00" and ExecutionStatus = "Running"'
```
### --more List more than one page. (By default, the `tctl workflow list` command lists one page of results.) **Example** ```bash tctl workflow list --more ``` ### --pagesize Specify the maximum number of [Workflow Executions](/workflow-execution) to list on a page. (By default, the `tctl workflow list` command lists 10 Workflow Executions per page.) **Example** ```bash tctl workflow list --pagesize <size> ``` ## listall The `tctl workflow listall` command lists all open or closed [Workflow Executions](/workflow-execution). By default, this command lists all closed Workflow Executions. To list open Workflow Executions, use the `--open` option. See also [`tctl workflow list`](#list), [`tctl workflow listarchived`](#listarchived), and [`tctl workflow scan`](#scan). `tctl workflow listall <modifiers>` The following modifiers control the behavior of the command. ### --print_raw_time Print the raw timestamp. **Example** ```bash tctl workflow listall --print_raw_time ``` ### --print_datetime Print the timestamp. **Example** ```bash tctl workflow listall --print_datetime ``` ### --print_memo Print a memo. **Example** ```bash tctl workflow listall --print_memo ``` ### --print_search_attr Print the [Search Attributes](/search-attribute). **Example** ```bash tctl workflow listall --print_search_attr ``` ### --print_full Print the full message without table formatting. **Example** ```bash tctl workflow listall --print_full ``` ### --print_json Print the raw JSON objects. **Example** ```bash tctl workflow listall --print_json ``` ### --open List open [Workflow Executions](/workflow-execution). (By default, the `tctl workflow listall` command lists closed Workflow Executions.) **Example** ```bash tctl workflow listall --open ``` ### --earliest_time Specify the earliest start time to list. Supported formats are as follows: - `<year>-<month>-<day>T<hour>:<minute>:<second><+|-><offset>` (an ISO 8601 datetime with a UTC offset). - Raw Unix Epoch time (the number of milliseconds since 0000 UTC on January 1, 1970).
- `<n><duration>`, where `<n>` is a value between 0 and 1000000 and `<duration>` is one of the following: - `second` or `s` - `minute` or `m` - `hour` or `h` - `day` or `d` - `week` or `w` - `month` or `M` - `year` or `y` **Examples** To specify 3:04:05 PM India Standard Time on January 2, 2022: ```bash tctl workflow listall --earliest_time '2022-01-02T15:04:05+05:30' ``` To specify 15 minutes before the current time: ```bash tctl workflow listall --earliest_time '15minute' ``` ### --latest_time Specify the latest start time to list. Supported formats are as follows: - `<year>-<month>-<day>T<hour>:<minute>:<second><+|-><offset>` (an ISO 8601 datetime with a UTC offset). - Raw Unix Epoch time (the number of milliseconds since 0000 UTC on January 1, 1970). - `<n><duration>`, where `<n>` is a value between 0 and 1000000 and `<duration>` is one of the following: - `second` or `s` - `minute` or `m` - `hour` or `h` - `day` or `d` - `week` or `w` - `month` or `M` - `year` or `y` Alias: `--lt` **Examples** To specify 11:02:17 PM Pacific Daylight Time on April 13, 2022: ```bash tctl workflow listall --latest_time '2022-04-13T23:02:17-07:00' ``` To specify 10 seconds before the current time: ```bash tctl workflow listall --latest_time '10second' ``` ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow listall --workflow_id <workflow-id> ``` ### --workflow_type Specify the name of a [Workflow Type](/workflow-definition#workflow-type). **Example** ```bash tctl workflow listall --workflow_type <name> ``` ### --status Specify the status of a [Workflow Execution](/workflow-execution). Supported values are as follows: - `completed` - `failed` - `canceled` - `terminated` - `continuedasnew` - `timedout` **Example** ```bash tctl workflow listall --status <status> ``` ### --query Specify an SQL-like query of [Search Attributes](/search-attribute). Using the `--query` option causes tctl to ignore all other filter options, including `open`, `earliest_time`, `latest_time`, `workflow_id`, and `workflow_type`. Alias: `-q` **Example** ```bash tctl workflow listall --query <query> ``` ## listarchived The `tctl workflow listarchived` command lists archived [Workflow Executions](/workflow-execution). By default, this command lists a maximum of 100 Workflow Executions. - To set the size of a page, use the `--pagesize` option. - To list all pages, use the `--all` option. See also [`tctl workflow list`](#list), [`tctl workflow listall`](#listall), and [`tctl workflow scan`](#scan). `tctl workflow listarchived <modifiers>` The following modifiers control the behavior of the command. ### --print_raw_time Print the raw timestamp. **Example** ```bash tctl workflow listarchived --print_raw_time ``` ### --print_datetime Print the timestamp. **Example** ```bash tctl workflow listarchived --print_datetime ``` ### --print_memo Print a memo. **Example** ```bash tctl workflow listarchived --print_memo ``` ### --print_search_attr Print the [Search Attributes](/search-attribute). **Example** ```bash tctl workflow listarchived --print_search_attr ``` ### --print_full Print the full message without table formatting. **Example** ```bash tctl workflow listarchived --print_full ``` ### --print_json Print the raw JSON objects. **Example** ```bash tctl workflow listarchived --print_json ``` ### --query Specify an SQL-like query of [Search Attributes](/search-attribute). Consult the documentation of the visibility archiver that is used by your [Namespace](/namespaces) for detailed instructions.
Alias: `-q` **Example** ```bash tctl workflow listarchived --query <query> ``` ### --pagesize Specify the maximum number of [Workflow Executions](/workflow-execution) to list on a page. (By default, the `tctl workflow listarchived` command lists 100 Workflow Executions per page.) **Example** ```bash tctl workflow listarchived --pagesize <size> ``` ### --all List all pages. **Example** ```bash tctl workflow listarchived --all ``` ## observe The `tctl workflow observe` command shows the progress of the [Event History](/workflow-execution/event#event-history) of a [Workflow Execution](/workflow-execution). See also [`tctl workflow observeid`](#observeid). `tctl workflow observe <modifiers>` Alias: `o` The following modifiers control the behavior of the command. ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow observe --workflow_id <workflow-id> ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). Alias: `-r` **Example** ```bash tctl workflow observe --run_id <run-id> ``` ### --show_detail Show event details. **Example** ```bash tctl workflow observe --show_detail ``` ### --max_field_length Specify the maximum length for each attribute field. The default value is 0. **Example** ```bash tctl workflow observe --max_field_length <length> ``` ## observeid The `tctl workflow observeid` command shows the progress of the [Event History](/workflow-execution/event#event-history) of a [Workflow Execution](/workflow-execution) for the specified [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) and optional [Run Id](/workflow-execution/workflowid-runid#run-id). `tctl workflow observeid <workflow-id> [<run-id>]` This command is a shortcut for `tctl workflow observe --workflow_id <workflow-id> [--run_id <run-id>]`. The following modifiers control the behavior of the command. ### --show_detail Show event details. **Example** ```bash tctl workflow observeid --show_detail ``` ### --max_field_length Specify the maximum length for each attribute field. The default value is 0. **Example** ```bash tctl workflow observeid --max_field_length <length> ``` ## query Alias: `q` The `tctl workflow query` command sends a [Query](/sending-messages#sending-queries) to a [Workflow Execution](/workflow-execution). Queries can be used to retrieve all or part of the Workflow state with given parameters.
```bash
$ tctl workflow query --workflow_id "HelloQuery" --query_type "getCount"
Query result as JSON:
3
```
Queries can also be used on completed Workflows. Let's complete a Workflow by updating its greeting, and then query the now-finished Workflow.
```bash
$ tctl workflow signal --workflow_id "HelloQuery" --name "updateGreeting" --input \"Bye\"
Signal workflow succeeded.

$ tctl workflow query --workflow_id "HelloQuery" --query_type "getCount"
Query result as JSON:
4
```
Queries are written as follows: `tctl workflow query --workflow_id <workflow-id> [modifiers]` The following modifiers control the behavior of the command. Always include required modifiers when executing this command. ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). **This modifier is required.** Alias: `-w` **Example** ```bash tctl workflow query --workflow_id <workflow-id> ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). Alias: `-r` **Example** ```bash tctl workflow query --run_id <run-id> ``` ### --query_type Specify the type of Query to run. **Example** ```bash tctl workflow query --query_type <type> ``` ### --input Pass input for the Query. Input must be in JSON format. For multiple JSON objects, concatenate them and use spaces as separators. Alias: `-i` **Example** ```bash tctl workflow query --input <json> ```
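For instance, a Query that takes two arguments could receive a JSON object and a JSON string in a single `--input` value. The Query type and arguments below are hypothetical; the single quotes keep the shell from consuming the inner quotes:
```bash
# Two JSON values, separated by a space, passed to a hypothetical Query.
tctl workflow query \
  --workflow_id "HelloQuery" \
  --query_type "getGreetings" \
  --input '{"limit":10} "pending"'
```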
### --input_file Pass input for the Query from a JSON file. For multiple JSON objects, concatenate them and use spaces or newline characters as separators. Input from the command line overwrites input from the file. **Example** ```bash tctl workflow query --input_file <filename> ``` ### --query_reject_condition Reject Queries based on Workflow state. Valid values are `not_open` and `not_completed_cleanly`. **Example** ```bash tctl workflow query --query_reject_condition <condition> ``` ## reset The `tctl workflow reset` command resets a [Workflow Execution](/workflow-execution) by either [`eventId`](#--event_id) or [`resetType`](#--reset_type). Resetting a Workflow allows the process to be resumed from a certain point without losing your parameters or Event History. To run multiple Reset operations at once, see [`tctl workflow reset-batch`](#reset-batch). `tctl workflow reset <modifiers>` The following modifiers control the behavior of the command. ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow reset --workflow_id <workflow-id> ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). Alias: `-r` **Example** ```bash tctl workflow reset --run_id <run-id> ``` ### --event_id Specify the `eventId` of any event after `WorkflowTaskStarted` to which you want to reset. Valid values are `WorkflowTaskCompleted`, `WorkflowTaskFailed`, and `WorkflowTaskTimeout`. **Example** ```bash tctl workflow reset --event_id <event-id> ``` ### --reason Specify a reason for resetting the [Workflow Execution](/workflow-execution). **Example** ```bash tctl workflow reset --reason <reason> ``` ### --reset_type Specify the event type to which you want to reset.

| Value                | Description                                                 |
| -------------------- | ----------------------------------------------------------- |
| `FirstWorkflowTask`  | Reset to the beginning of the Event History.                |
| `LastWorkflowTask`   | Reset to the end of the Event History.                      |
| `LastContinuedAsNew` | Reset to the end of the Event History for the previous Run. |
| `BadBinary`          | Reset to the point where a bad binary was used.             |

**Example** ```bash tctl workflow reset --reset_type <type> ``` ### --reset_reapply_type Specify the types of events to reapply after the reset point. Valid values are `All`, `Signal`, and `None`. The default is `All`. **Example** ```bash tctl workflow reset --reset_reapply_type <type> ``` ### --reset_bad_binary_checksum Specify the binary checksum when using `--reset_type BadBinary`. **Example** ```bash tctl workflow reset --reset_bad_binary_checksum <checksum> ``` ## reset-batch The `tctl workflow reset-batch` command resets a batch of [Workflow Executions](/workflow-execution) by [`resetType`](#--reset_type). Resetting a Workflow allows the process to be resumed from a certain point without losing your parameters or Event History. To reset individual Workflows, see [`tctl workflow reset`](#reset). `tctl workflow reset-batch <modifiers>` The following modifiers control the behavior of the command. ### --input_file Provide an input file that specifies the [Workflow Executions](/workflow-execution) to reset. Each line contains one [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) as the base Run and, optionally, a [Run Id](/workflow-execution/workflowid-runid#run-id). If a Run Id is not specified, the current Run Id is used. **Example** ```bash tctl workflow reset-batch --input_file <filename> ```
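A sketch of such an input file and invocation; the file name, Workflow Ids, and Run Id below are illustrative. The Workflow Id and optional Run Id on a line are separated by the `--input_separator` value (a tab by default):
```bash
# Create an input file: one Workflow Id per line, optionally followed
# by a tab and a Run Id.
printf 'order-workflow-001\norder-workflow-002\tbe66519b-5f09-40cd-b2e8-20e4106244dc\n' > reset-targets.txt

# Reset every listed Execution to its last Workflow Task.
tctl workflow reset-batch \
  --input_file reset-targets.txt \
  --reset_type LastWorkflowTask \
  --reason 'redeploying a fixed Worker binary'
```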
### --query Specify an SQL-like query of [Search Attributes](/search-attribute) describing the [Workflow Executions](/workflow-execution) to reset. Alias: `-q` **Example** ```bash tctl workflow reset-batch --query <query> ``` ### --exclude_file Provide an input file that specifies [Workflow Executions](/workflow-execution) to exclude from resetting. Each line contains one [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). **Example** ```bash tctl workflow reset-batch --exclude_file <filename> ``` ### --input_separator Specify the separator for the input file. The default is a tab (`\t`). **Example** ```bash tctl workflow reset-batch --input_separator <separator> ``` ### --reason Specify a reason for resetting the [Workflow Executions](/workflow-execution). **Example** ```bash tctl workflow reset-batch --reason <reason> ``` ### --input_parallism Specify the number of goroutines to run in parallel. Each goroutine processes one line per second. The default is 1. **Example** ```bash tctl workflow reset-batch --input_parallism <count> ``` ### --skip_current_open Indicate that a [Workflow Execution](/workflow-execution) should be skipped if the current Run is open for the same [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) as the base Run. **Example** ```bash tctl workflow reset-batch --skip_current_open ``` ### --skip_base_is_not_current Indicate that a [Workflow Execution](/workflow-execution) should be skipped if the base Run is not the current Run. **Example** ```bash tctl workflow reset-batch --skip_base_is_not_current ``` ### --only_non_deterministic Indicate that a [Workflow Execution](/workflow-execution) should be reset only if its last event is `WorkflowTaskFailed` with a nondeterminism error. **Example** ```bash tctl workflow reset-batch --only_non_deterministic ``` ### --dry_run Simulate use of the `tctl workflow reset-batch` command without resetting any [Workflow Executions](/workflow-execution). Output is logged to `stdout`. **Example** ```bash tctl workflow reset-batch --dry_run ``` ### --reset_type Specify the event type to which you want to reset.

| Value                | Description                                                 |
| -------------------- | ----------------------------------------------------------- |
| `FirstWorkflowTask`  | Reset to the beginning of the Event History.                |
| `LastWorkflowTask`   | Reset to the end of the Event History.                      |
| `LastContinuedAsNew` | Reset to the end of the Event History for the previous Run. |
| `BadBinary`          | Reset to the point where a bad binary was used.             |

**Example** ```bash tctl workflow reset-batch --reset_type <type> ``` ### --reset_bad_binary_checksum Specify the binary checksum when using `--reset_type BadBinary`. **Example** ```bash tctl workflow reset-batch --reset_bad_binary_checksum <checksum> ``` ## run The `tctl workflow run` command starts a new [Workflow Execution](/workflow-execution) and can show the progress of a Workflow Execution. The command is entered in the following format: `tctl workflow run <modifiers>` To run a Workflow, the user must specify the following: - Task Queue name (`--taskqueue`) - Workflow Type (`--workflow_type`) ```bash tctl workflow run --taskqueue your-task-queue-name --workflow_type YourWorkflowDefinitionName ``` Single quotes (`''`) are used to wrap input as JSON. This command doesn't finish until the Workflow completes. The following modifiers control the behavior of the command. ### --taskqueue Specify a [Task Queue](/task-queue).
Alias: `--t` **Example** ```bash tctl workflow run --taskqueue <name> ``` ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow run --workflow_id <workflow-id> ``` ### --workflow_type Specify the name of a [Workflow Type](/workflow-definition#workflow-type). **Example** ```bash tctl workflow run --workflow_type <name> ``` ### --execution_timeout Specify the [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) of the [Workflow Execution](/workflow-execution) in seconds. The default value is 0. **Example** ```bash tctl workflow run --execution_timeout <seconds> ``` ### --workflow_task_timeout Specify the [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) of the [Workflow Task](/tasks#workflow-task) in seconds. The default value is 10. **Example** ```bash tctl workflow run --workflow_task_timeout <seconds> ``` ### --cron Specify a [Cron Schedule](/cron-job#cron-schedules). **Example** ```bash tctl workflow run --cron <schedule> ``` ### --workflowidreusepolicy Specify a [Workflow Id Reuse Policy](/workflow-execution/workflowid-runid#workflow-id-reuse-policy). Configure whether the same [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) is allowed for use in a new [Workflow Execution](/workflow-execution). There are three allowed values: - [AllowDuplicateFailedOnly](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) - [AllowDuplicate](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) - [RejectDuplicate](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) **Examples** ```bash tctl workflow run --workflowidreusepolicy AllowDuplicate tctl workflow run --workflowidreusepolicy AllowDuplicateFailedOnly tctl workflow run --workflowidreusepolicy RejectDuplicate ``` ### --input Pass input for the Workflow. Input must be in JSON format. For multiple JSON objects, pass each in a separate `--input` option. Use `null` for null values. Alias: `-i` **Example** ```bash tctl workflow run --input <json> ``` ### --input_file Pass input for the Workflow from a JSON file. For multiple JSON objects, concatenate them and use spaces or newline characters as separators. Input from the command line overwrites input from the file. **Example** ```bash tctl workflow run --input_file <filename> ``` ### --memo_key Pass a key for a memo. For multiple keys, concatenate them and use spaces as separators. **Example** ```bash tctl workflow run --memo_key <key> ``` ### --memo Pass a memo. A memo is information in JSON format that can be shown when the Workflow is listed. For multiple memos, concatenate them and use spaces as separators. The order must match the order of keys in `--memo_key`. **Example** ```bash tctl workflow run --memo <json> ``` ### --memo_file Pass information for a memo from a JSON file. For multiple JSON objects, concatenate them and use spaces or newline characters as separators. The order must match the order of keys in `--memo_key`. **Example** ```bash tctl workflow run --memo_file <filename> ``` ### --search_attr_key Specify a [Search Attribute](/search-attribute) key. For multiple keys, concatenate them and use pipes (`|`) as separators. To list valid keys, use the `tctl cluster get-search-attributes` command. **Example** ```bash tctl workflow run --search_attr_key <keys> ``` ### --search_attr_value Specify a [Search Attribute](/search-attribute) value. For multiple values, concatenate them and use pipes (`|`) as separators. If a value is an array, use JSON format, such as `["a","b"]`, `[1,2]`, `["true","false"]`, or `["2022-06-07T17:16:34-08:00","2022-06-07T18:16:34-08:00"]`. To list valid keys and value types, use the `tctl cluster get-search-attributes` command. **Example** ```bash tctl workflow run --search_attr_value <values> ```
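Used together, the keys and values are matched by position. A sketch that sets two Search Attributes on a new run; `CustomKeywordField` and `CustomIntField` are the illustrative attribute names used elsewhere on this page and must already be registered on the Cluster:
```bash
# Pipe-separated keys and values, matched by position:
# CustomKeywordField = "keyword1", CustomIntField = 5
tctl workflow run \
  --taskqueue your-task-queue-name \
  --workflow_type YourWorkflowDefinitionName \
  --search_attr_key 'CustomKeywordField|CustomIntField' \
  --search_attr_value 'keyword1|5'
```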
### --show_detail Show event details. **Example** ```bash tctl workflow run --show_detail ``` ### --max_field_length Specify the maximum length for each attribute field. The default value is 0. **Example** ```bash tctl workflow run --max_field_length <length> ``` ## scan The `tctl workflow scan` command lists [Workflow Executions](/workflow-execution). It is faster than the `tctl workflow listall` command, but the results are not sorted. By default, this command lists a maximum of 2000 Workflow Executions. To set the size of a page, use the `--pagesize` option. See also [`tctl workflow list`](#list), [`tctl workflow listall`](#listall), and [`tctl workflow listarchived`](#listarchived). `tctl workflow scan <modifiers>` The following modifiers control the behavior of the command. ### --print_raw_time Print the raw timestamp. **Example** ```bash tctl workflow scan --print_raw_time ``` ### --print_datetime Print the timestamp. **Example** ```bash tctl workflow scan --print_datetime ``` ### --print_memo Print a memo. **Example** ```bash tctl workflow scan --print_memo ``` ### --print_search_attr Print the [Search Attributes](/search-attribute). **Example** ```bash tctl workflow scan --print_search_attr ``` ### --print_full Print the full message without table formatting. **Example** ```bash tctl workflow scan --print_full ``` ### --print_json Print the raw JSON objects. **Example** ```bash tctl workflow scan --print_json ``` ### --pagesize Specify the maximum number of [Workflow Executions](/workflow-execution) to list on a page. (By default, the `tctl workflow scan` command lists 2000 Workflow Executions per page.) **Example** ```bash tctl workflow scan --pagesize <size> ``` ### --query Specify an SQL-like query of [Search Attributes](/search-attribute). Alias: `-q` **Example** ```bash tctl workflow scan --query <query> ``` ## show The `tctl workflow show` command shows the [Event History](/workflow-execution/event#event-history) for the specified [Workflow Execution](/workflow-execution). `tctl workflow show <modifiers>` See also [`tctl workflow showid`](#showid). The following modifiers control the behavior of the command. ### --workflow_id Show the History of a [Workflow Execution](/workflow-execution) by specifying a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow show --workflow_id <workflow-id> ``` ### --run_id Show the History of a [Workflow Execution](/workflow-execution) by specifying a [Run Id](/workflow-execution/workflowid-runid#run-id). Alias: `-r` **Example** ```bash tctl workflow show --run_id <run-id> ``` ### --print_datetime Print the timestamp. **Example** ```bash tctl workflow show --print_datetime ``` ### --print_raw_time Print the raw timestamp. **Example** ```bash tctl workflow show --print_raw_time ``` ### --output_filename Serialize an event to a file. **Example** ```bash tctl workflow show --output_filename <filename> ``` ### --print_full Print full event details. **Example** ```bash tctl workflow show --print_full ``` ### --print_event_version Print the event version. **Example** ```bash tctl workflow show --print_event_version ``` ### --event_id Print the details of a specified event. The default value is 0.
**Example** ```bash tctl workflow show --event_id <event-id> ``` ### --max_field_length Specify the maximum length for each attribute field. The default value is 500. **Example** ```bash tctl workflow show --max_field_length <length> ``` ### --reset_points_only Show only events that are eligible for reset. **Example** ```bash tctl workflow show --reset_points_only ``` ## showid The `tctl workflow showid` command shows the Workflow Execution Event History for the specified [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) and optional [Run Id](/workflow-execution/workflowid-runid#run-id). `tctl workflow showid <workflow-id> [<run-id>]` This command is a shortcut for `tctl workflow show --workflow_id <workflow-id> [--run_id <run-id>]`. Example: ```bash tctl workflow showid <workflow-id> ``` Example output:
```text
1 WorkflowExecutionStarted {WorkflowType:{Name:HelloWorld}, ParentInitiatedEventId:0, TaskQueue:{Name:HelloWorldTaskQueue, Kind:Normal}, Input:[Temporal], WorkflowExecutionTimeout:1h0m0s, WorkflowRunTimeout:1h0m0s, WorkflowTaskTimeout:10s, Initiator:Unspecified, LastCompletionResult:[], OriginalExecutionRunId:f0c04163-833f-490b-99a9-ee48b6199213, Identity:tctl@z0mb1e, FirstExecutionRunId:f0c04163-833f-490b-99a9-ee48b6199213, Attempt:1, WorkflowExecutionExpirationTime:2020-10-13 21:41:06.349 +0000 UTC, FirstWorkflowTaskBackoff:0s}
2 WorkflowTaskScheduled {TaskQueue:{Name:HelloWorldTaskQueue, Kind:Normal}, StartToCloseTimeout:10s, Attempt:1}
3 WorkflowTaskStarted {ScheduledEventId:2, Identity:15079@z0mb1e, RequestId:731f7b41-5ae4-42e4-9695-ecd857d571f1}
4 WorkflowTaskCompleted {ScheduledEventId:2, StartedEventId:3, Identity:15079@z0mb1e}
5 WorkflowExecutionCompleted {Result:[], WorkflowTaskCompletedEventId:4}
```
The following modifiers control the behavior of the command. ### --print_datetime Print the timestamp. **Example** ```bash tctl workflow showid --print_datetime ``` ### --print_raw_time Print the raw timestamp. **Example** ```bash tctl workflow showid --print_raw_time ``` ### --output_filename Serialize an event to a file. **Example** ```bash tctl workflow showid --output_filename <filename> ``` ### --print_full Print full event details. **Example** ```bash tctl workflow showid --print_full ``` ### --print_event_version Print the event version. **Example** ```bash tctl workflow showid --print_event_version ``` ### --event_id Print the details of a specified event. The default value is 0. **Example** ```bash tctl workflow showid --event_id <event-id> ``` ### --max_field_length Specify the maximum length for each attribute field. The default value is 500. **Example** ```bash tctl workflow showid --max_field_length <length> ``` ### --reset_points_only Show only events that are eligible for reset. **Example** ```bash tctl workflow showid --reset_points_only ``` ## signal The `tctl workflow signal` command [Signals](/sending-messages#sending-signals) a [Workflow Execution](/workflow-execution). Workflows listen for Signals by their Signal name, and can be made to listen to one or more Signal names. A Signal can also be sent to the Workflow Executions that match an SQL-like query, as shown in the modifier examples below. The Workflow below is started with the Workflow Id "HelloSignal": ```bash tctl workflow start --workflow_id "HelloSignal" --taskqueue HelloWorldTaskQueue --workflow_type HelloWorld --execution_timeout 3600 --input \"World\" ``` The Worker output shows the initial greeting: ```text 13:57:44.258 [workflow-method] INFO c.t.s.javaquickstart.GettingStarted - 1: Hello World! ``` Signals can also be used to change variable values.
```bash tctl workflow signal --workflow_id "HelloSignal" --name "updateGreeting" --input \"Hi\" ``` The output would change from the first Signal received. ```text 13:57:44.258 [workflow-method] INFO c.t.s.javaquickstart.GettingStarted - 1: Hello World! 13:58:22.352 [workflow-method] INFO c.t.s.javaquickstart.GettingStarted - 2: Hi World! ``` When a Signal is sent, an await condition is made to block any Signals that contain the same input value. However, changing the greeting in our example unblocks it: ```bash tctl workflow signal --workflow_id "HelloSignal" --name "updateGreeting" --input \"Welcome\" ``` Worker output: ```text 13:57:44.258 [workflow-method] INFO c.t.s.javaquickstart.GettingStarted - 1: Hello World! 13:58:22.352 [workflow-method] INFO c.t.s.javaquickstart.GettingStarted - 2: Hi World! 13:59:29.097 [workflow-method] INFO c.t.s.javaquickstart.GettingStarted - 3: Welcome World! ``` Sending Signals does not require a running Worker. ```bash tctl workflow signal --workflow_id "HelloSignal" --name "updateGreeting" --input \"Welcome\" ``` CLI output: ```text Signal workflow succeeded. ``` The Signal request is queued inside the Temporal Server until the Worker is restarted. If the given Signal contains the same input as before, the queued Signal will be ignored. Complete the Workflow by sending a Signal with a "Bye" greeting: ```bash tctl workflow signal --workflow_id "HelloSignal" --name "updateGreeting" --input \"Bye\" ``` Check that the Workflow Execution has been completed. ```bash tctl workflow showid HelloSignal ``` Signals are written as follows: ```bash tctl workflow signal --workflow_id ``` or ```bash tctl workflow signal --query ``` The following modifiers control the behavior of the command. Make sure to include required modifiers in all command executions. ### --workflow_id Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). **This modifier is required.** Alias: `-w` **Example** ```bash tctl workflow signal --workflow_id ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). Alias: `-r` **Example** ```bash tctl workflow signal --run_id ``` ### --name Specify the name of a [Signal](/sending-messages#sending-signals). **Example** ```bash tctl workflow signal --query --name ``` ### --input Pass input for the [Signal](/sending-messages#sending-signals). Input must be in JSON format. Alias: `-i` **Example** ```bash tctl workflow signal --query --input ``` ### --input_file Pass input for the [Signal](/sending-messages#sending-signals) from a JSON file. **Example** ```bash tctl workflow signal --query --input_file ``` ## stack The `tctl workflow stack` command queries [Workflow Execution](/workflow-execution) with `__stack_trace` as the query type. This command can be used to locate errors and blocks in a [Workflow Definition](/workflow-definition). `tctl workflow stack ` The following modifiers control the behavior of the command. ### --workflow_id **This is a required modifier.** Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow stack --workflow_id ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). Alias: `-r` **Example** ```bash tctl workflow stack --run_id ``` ### --input Pass input for the query. Input must be in JSON format. For multiple JSON objects, concatenate them and use spaces as separators. 
Alias: `-i` **Example** ```bash tctl workflow stack --input <json> ``` ### --input_file Pass input for the query from a JSON file. For multiple JSON objects, concatenate them and use spaces or newline characters as separators. Input from the command line overwrites input from the file. **Example** ```bash tctl workflow stack --input_file <filename> ``` ### --query_reject_condition Reject queries based on Workflow state. Valid values are `not_open` and `not_completed_cleanly`. **Example** ```bash tctl workflow stack --query_reject_condition <condition> ``` ## start The `tctl workflow start` command starts a new [Workflow Execution](/workflow-execution). Unlike `run`, this command returns the Workflow Id and Run Id immediately after starting the Workflow. `tctl workflow start <modifiers>` The following modifiers control the behavior of the command. Always include required modifiers when executing this command. ### --taskqueue Specify a [Task Queue](/task-queue). Alias: `--t` **Example** ```bash tctl workflow start --taskqueue <name> ``` ### --workflow_id **This is a required modifier.** Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow start --workflow_id <workflow-id> ``` If a Workflow is started without providing an Id, the Client generates one in the form of a UUID. Temporal recommends using a business Id rather than the client-generated UUID. **Example** ```bash tctl workflow start --workflow_id "HelloTemporal1" --taskqueue HelloWorldTaskQueue --workflow_type HelloWorld --execution_timeout 3600 --input \"Temporal\" ``` ### --workflow_type Specify the name of a [Workflow Type](/workflow-definition#workflow-type). **Example** ```bash tctl workflow start --workflow_type <name> ``` ### --execution_timeout Specify the [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) of the [Workflow Execution](/workflow-execution) in seconds. The default value is 0. **Example** ```bash tctl workflow start --execution_timeout <seconds> ``` ### --workflow_task_timeout Specify the [Start-To-Close Timeout](/encyclopedia/detecting-activity-failures#start-to-close-timeout) of the [Workflow Task](/tasks#workflow-task) in seconds. The default value is 10. **Example** ```bash tctl workflow start --workflow_task_timeout <seconds> ``` ### --cron Specify a [Cron Schedule](/cron-job#cron-schedules). **Example** ```bash tctl workflow start --cron <schedule> ``` ### --workflowidreusepolicy Specify a [Workflow Id Reuse Policy](/workflow-execution/workflowid-runid#workflow-id-reuse-policy). Configure whether the same [Workflow Id](/workflow-execution/workflowid-runid#workflow-id) is allowed for use in a new [Workflow Execution](/workflow-execution). There are three allowed values: - [AllowDuplicateFailedOnly](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) - [AllowDuplicate](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) - [RejectDuplicate](/workflow-execution/workflowid-runid#workflow-id-reuse-policy) **Examples** ```bash tctl workflow start --workflowidreusepolicy AllowDuplicate tctl workflow start --workflowidreusepolicy AllowDuplicateFailedOnly tctl workflow start --workflowidreusepolicy RejectDuplicate ``` :::note Multiple Workflows with the same Id cannot be run at the same time. ::: ### --input Pass input for the Workflow. Input must be in JSON format. For multiple JSON objects, pass each in a separate `--input` option. Use `null` for null values. Alias: `-i` **Example** ```bash tctl workflow start --input <json> ``` ### --input_file Pass input for the Workflow from a JSON file. For multiple JSON objects, concatenate them and use spaces or newline characters as separators. Input from the command line overwrites input from the file. **Example** ```bash tctl workflow start --input_file <filename> ```
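For example, a file holding two newline-separated JSON arguments can be passed as follows. The file name and argument values are illustrative:
```bash
# Each line of the file is one Workflow argument.
printf '"Temporal"\n{"count":3}\n' > workflow-input.json

tctl workflow start \
  --workflow_id "HelloFile" \
  --taskqueue HelloWorldTaskQueue \
  --workflow_type HelloWorld \
  --input_file workflow-input.json
```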
For multiple JSON objects, concatenate them and use spaces or newline characters as separators. Input from the command line overwrites input from the file. **Example** ```bash tctl workflow start --input_file ``` ### --memo_key Pass a key for a memo. For multiple keys, concatenate them and use spaces as separators. **Example** ```bash tctl workflow start --memo_key ``` ### --memo Pass information for a [memo](/workflow-execution#memo) from a JSON file. Memos are immutable key/value pairs that can be attached to a workflow run when starting the workflow. Memos are visible when listing workflows. For multiple memos, concatenate them and use spaces as separators. The order must match the order of keys in `--memo_key`. **Example** ```bash tctl workflow start \ -tq your-task-queue \ -wt your-workflow \ -et 60 \ -i '"temporal"' \ -memo_key '' \ -memo '' ``` ### --memo_file Pass information for a memo from a JSON file. For multiple JSON objects, concatenate them and use spaces or newline characters as separators. The order must match the order of keys in `--memo_key`. **Example** ```bash tctl workflow start --memo_file ``` ### --search_attr_key Specify a [Search Attribute](/search-attribute) name. For multiple names, concatenate them and use pipes (`|`) as separators. To list valid Search Attributes, use the `tctl cluster get-search-attributes` command. **Example** ```bash tctl workflow start --search_attr_key ``` ### --search_attr_value Specify a [Search Attribute](/search-attribute) value. For multiple values, concatenate them and use pipes (`|`) as separators. If a value is an array, use JSON format, such as `["a","b"]`, `[1,2]`, `["true","false"]`, or `["2022-06-07T17:16:34-08:00","2022-06-07T18:16:34-08:00"]`. To list valid Search Attributes and value types, use the `tctl cluster get-search-attributes` command. **Example** ```bash tctl workflow start --search_attr_value ``` ## terminate The `tctl workflow terminate` command terminates a [Workflow Execution](/workflow-execution). Terminating a running Workflow Execution records a `WorkflowExecutionTerminated` event as the closing event in the History. No more [Workflow Task](/tasks#workflow-task) will be scheduled. See also [`tctl workflow cancel`](#cancel). `tctl workflow terminate --query ` The following modifiers control the behavior of the command. ### --workflow_id _Required modifier_ Specify a [Workflow Id](/workflow-execution/workflowid-runid#workflow-id). Alias: `-w` **Example** ```bash tctl workflow terminate --workflow_id ``` ### --run_id Specify a [Run Id](/workflow-execution/workflowid-runid#run-id). If `run_id` is not specified, `tctl` terminates the last Workflow Execution for the specified `workflow_id`. Alias: `-r` **Example** ```bash tctl workflow terminate --run_id ``` ### --reason Specify a reason for terminating the [Workflow Execution](/workflow-execution). **Example** ```bash tctl workflow terminate --workflow_id --reason ``` --- ## Troubleshoot the blob size limit error The `BlobSizeLimitError` is an error that occurs when the size of a blob (payloads including Workflow context and each Workflow and Activity argument and return value) exceeds the set limit in Temporal. - The max payload for a single request is 2 MB. - The max size limit for any given [Event History](/workflow-execution/event#event-history) transaction is 4 MB. ## Why does this error occur? This error occurs when the size of the blob exceeds the maximum size allowed by Temporal. 
This limit helps the Temporal Service prevent excessive resource usage and potential performance issues when handling large payloads.

## How do I resolve this error?

To resolve this error, reduce the size of the blob so that it fits within the limits described above. There are multiple strategies you can use to avoid this error:

1. Use compression with a [custom payload codec](/payload-codec) for large payloads.
   - This addresses the immediate issue of the blob size limit; however, if blob sizes continue to grow, this problem can arise again.
2. Break larger batches of commands into smaller batch sizes (see the sketch after this list):
   - Workflow-level batching:
     1. Modify the Workflow to process Activities or Child Workflows in smaller batches.
     2. Iterate through each batch, waiting for completion before moving to the next.
   - Workflow Task-level batching:
     1. Execute Activities in smaller batches within a single Workflow Task.
     2. Introduce brief pauses or sleeps (for example, 1 ms) between batches.
3. Consider offloading large payloads to an object store to reduce the risk of exceeding blob size limits:
   1. Pass references to the stored payloads within the Workflow instead of the actual data.
   2. Retrieve the payloads from the object store when needed during execution.
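As referenced in strategy 2, here is a minimal Go SDK sketch of Workflow-level batching. The `"ProcessItem"` Activity name, the batch size, and the timeout are illustrative assumptions, not part of any Temporal API:

```go
package app

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// BatchingWorkflow processes items in fixed-size batches so that no single
// Workflow Task schedules an unbounded number of Activities at once.
func BatchingWorkflow(ctx workflow.Context, items []string) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})

	const batchSize = 10 // arbitrary; tune to your payload sizes
	for start := 0; start < len(items); start += batchSize {
		end := start + batchSize
		if end > len(items) {
			end = len(items)
		}

		// Schedule one batch of Activities concurrently.
		futures := make([]workflow.Future, 0, batchSize)
		for _, item := range items[start:end] {
			futures = append(futures, workflow.ExecuteActivity(ctx, "ProcessItem", item))
		}

		// Wait for the whole batch to complete before starting the next,
		// keeping command sets and payload accumulation small.
		for _, f := range futures {
			if err := f.Get(ctx, nil); err != nil {
				return err
			}
		}
	}
	return nil
}
```

For Workflow Task-level batching, a brief `workflow.Sleep` between batches forces a new Workflow Task, which keeps any single Event History transaction small.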
---

## Troubleshoot the deadline-exceeded error

All requests made to the [Temporal Service](/temporal-service) by the Client or Worker are [gRPC requests](https://grpc.io/docs/what-is-grpc/core-concepts/#deadlines). Sometimes, when these frontend requests can't be completed, you'll see this particular error message: `Context: deadline exceeded`. Network interruptions, timeouts, server overload, and Query errors are some of the causes of this error. The following sections discuss the nature of this error and how to troubleshoot it.

### Check system clocks

Timing skew can cause the system clock on a Worker to drift behind the system clock of the Temporal Service. If the difference between the two clocks exceeds an Activity's Start-To-Close Timeout, an `Activity complete after timeout` error occurs.

If you receive an `Activity complete after timeout` error alongside `Context: deadline exceeded`, check the clocks on the Temporal Service's system and the system of the Worker sending that error. If the Worker's clock doesn't match the Temporal Service's, synchronize all clocks to an NTP server.

### Check Frontend Service logs

:::note

Cloud users cannot access some of the logs needed to diagnose the source of the error. If you're using Temporal Cloud, create a [support ticket](/cloud/support#support-ticket) with as much information as possible, including the Namespace Name and the Workflow Ids of some Workflow Executions in which the issue occurs.

:::

[Frontend Service](/temporal-service/temporal-server#frontend-service) logs can show which parts of the Temporal Service aren't working. For the error to appear, a service pod or container must be up and running.

OSS users can verify that the Frontend Service is connected and running by using the Temporal CLI.

```
temporal operator cluster health --address 127.0.0.1:7233
```

Use [`grpc-health-probe`](https://github.com/grpc-ecosystem/grpc-health-probe) to check the Frontend Service, [Matching Service](/temporal-service/temporal-server#matching-service), and [History Service](/temporal-service/temporal-server#history-service).

```
./grpc-health-probe -addr=frontendAddress:frontendPort -service=temporal.api.workflowservice.v1.WorkflowService
./grpc-health-probe -addr=matchingAddress:matchingPort -service=temporal.api.matchingservice.v1.MatchingService
./grpc-health-probe -addr=historyAddress:historyPort -service=temporal.api.historyservice.v1.HistoryService
```

Logs can also be used to find failed Client [Query](/sending-messages#sending-queries) requests.

### Check your Temporal Service metrics

Temporal Service metrics can be used to detect issues (such as `resource exhausted`) that impact Temporal Service health. A `resource exhausted` error can cause your Client request to fail, which prompts the `deadline exceeded` error. Use the following query to check for errors in `RpsLimit`, `ConcurrentLimit`, and `SystemOverloaded` on your metrics dashboard.

```
sum(rate(service_errors_resource_exhausted{}[1m])) by (resource_exhausted_cause)
```

Look for high latencies, short timeouts, and other abnormal [Temporal Service metrics](/references/cluster-metrics). If the metrics come from a specific service (such as the History Service), check that service's health and performance.

### Check your Client and Worker configuration

Check your [Client and Worker configuration](/references/configuration) files for missing or invalid target values, such as the following:

- Server names
- Network or host addresses
- Certificates

Invalid targets also cause `connection refused` errors alongside `deadline exceeded`. Check that the Client connects after updating your files.

### Advanced troubleshooting

In addition to the steps listed in the previous sections, check the areas mentioned in each of the following scenarios.

### After enabling mTLS

Check the health of the Temporal Service with `temporal operator cluster health`.

```
temporal operator cluster health --address [SERVER_ADDRESS]
```

Add any missing [environment variables](/references/web-ui-environment-variables) to the configuration files, and correct any incorrect values. Server names and certificates must match between the Frontend and internode services.

### After restarting the Temporal Service

You might not be giving the Temporal Service enough time to respond and reconnect. Restart the Server, wait, and then check all services for connectivity and further errors. If the error persists, review your Workflow Execution History and server logs for more specific causes before continuing to troubleshoot.

### When executing or scheduling Workflows

One or more services might be unable to connect to the [Frontend Service](/temporal-service/temporal-server#frontend-service), or the Workflow might be unable to complete requests within the given connection time. Increase the value of `frontend.keepAliveMaxConnectionAge` so that requests can finish before the connection terminates.

:::note

If you increase `frontend.keepAliveMaxConnectionAge` values, consider monitoring your server performance for load.

:::

---

Still unable to resolve your issue?

- If you use Temporal Cloud, create a [support ticket](/cloud/support#support-ticket).
- If you use our open source software or Temporal Cloud, check for similar questions and possible solutions in our [community forum](https://community.temporal.io) or [community Slack](https://temporal.io/slack).

---

## Error Handling and Troubleshooting

Even the most reliable systems can encounter issues. Our troubleshooting guides are designed to help you quickly identify and resolve potential errors, ensuring your Temporal applications run smoothly and efficiently.
- [Troubleshoot the BlobSizeLimitError](/troubleshooting/blob-size-limit-error): The `BlobSizeLimitError` happens when the size of a blob (payloads including Workflow context and each Workflow and Activity argument and return value) is too large. The maximum payload for a single request is 2 MB, and the maximum size for any Event History transaction is 4 MB.
- [Troubleshoot the Deadline-Exceeded Error](/troubleshooting/deadline-exceeded-error): The "Context: deadline exceeded" error occurs when requests to the Temporal Service by the Client or Worker cannot be completed. This can be due to network issues, timeouts, server overload, or Query errors.
- [Troubleshoot the Failed Reaching Server Error](/troubleshooting/last-connection-error): The message "Failed reaching server: last connection error" often happens due to an expired TLS certificate, or during the Server startup process, when Client requests reach the Server before roles are fully initialized.

---

## Troubleshoot the failed reaching server error

The message `Failed reaching server: last connection error` often results from an expired TLS certificate, or from the Server startup process, in which Client requests reach the Server before the roles are fully initialized.

This troubleshooting guide shows you how to do the following:

- Verify the certificate expiration date
- Renew the certificate
- Update the server configuration

### Verify the TLS certificate expiration date

The first step in troubleshooting this error is to verify the expiration date of the TLS certificate. Then you can renew the certificate and update the server configuration.

Choose one of the following methods to verify the expiration date of the TLS certificate:

**Verify the expiration date of the TLS certificate**

List the expiration date with the following command:

```command
tcld namespace accepted-client-ca list \
  --namespace <namespace>.<account-id> | \
  jq -r '.[0].notAfter'
```

If the returned date is in the past, the certificate has expired.

**Existing certificate management infrastructure**

If you are using an existing certificate management infrastructure, use it to verify the TLS connection. For example, if you are using OpenSSL, run the following command:

```command
openssl s_client -connect <your-namespace-grpc-endpoint>:<port> -showcerts -cert ~/certs/path.pem -key ~/certs/path.key -tls1_2
```

**Self-signed certificate**

If you are using a self-signed certificate, run the following Temporal CLI command:

```command
temporal namespace describe \
  --namespace <namespace>.<account-id> \
  --address <your-namespace-grpc-endpoint> \
  --tls-cert-path <path-to-cert> \
  --tls-key-path <path-to-key>
```

Your Namespace gRPC endpoint is available on the details page for your [Temporal Cloud Namespace](https://cloud.temporal.io/namespaces).

### Renew the TLS certificate

If the certificate has expired or is about to expire, the next step is to renew it. You can do this by contacting the certificate authority (CA) that issued the certificate and requesting a renewal.

**Existing certificate management infrastructure**

If you are using an existing certificate management infrastructure, contact the administrator of the infrastructure to renew the certificate.

**Self-signed certificate**

If you are using a self-signed certificate or don't have an existing infrastructure, you can generate a new certificate using OpenSSL, [step CLI](https://github.com/smallstep/cli), or similar tools. For information on generating a self-signed certificate, see [Control authorization](/cloud/certificates#control-authorization).
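If you prefer to script this, the following is a minimal Go sketch that generates a one-year self-signed CA with the standard library's `crypto/x509` package. The file names, subject, and validity period are illustrative assumptions; for production, prefer your existing PKI or the tools mentioned above.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate a P-256 private key for the CA.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(time.Now().UnixNano()),
		Subject:               pkix.Name{CommonName: "temporal-client-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0), // one-year validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}

	certOut, err := os.Create("ca.pem") // upload this CA cert to Temporal Cloud
	if err != nil {
		log.Fatal(err)
	}
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyDER, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	keyOut, err := os.Create("ca.key") // keep this secret; never upload it
	if err != nil {
		log.Fatal(err)
	}
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
}
```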
### Update the CA certificate in the server configuration

Update the new CA certificate in the Temporal Cloud server configuration. You can update certificates using any of the following methods:

- [Update certificates using Temporal Cloud UI](/cloud/certificates#update-certificates-using-temporal-cloud-ui)
- [Update certificates using tcld](/cloud/certificates#update-certificates-using-tcld)

After you update the TLS certificate in the server configuration, retry your connection.

### Set reminders

Don't let your certificates expire. Add reminders to your calendar to issue new CA certificates well before the expiration dates of the existing ones.

### Additional resources

The preceding steps should help you troubleshoot the `failed reaching server: last connection error` error caused by an expired TLS certificate. If the issue persists, verify that the Client you are using to connect to the server has the correct TLS certificate and that the Client requests reach the server after the roles are fully initialized.

If you still need help, [create a support ticket](/cloud/support#support-ticket).

---

## Performance bottlenecks troubleshooting guide

This guide outlines common performance bottlenecks in Temporal Workers and Clients. It covers key latency metrics and root causes of high values, and provides diagnostic steps and troubleshooting strategies. These metrics can help you optimize Temporal deployments and Workflow execution.

To get the most out of this guide, you should be familiar with [Temporal architecture](/temporal), [Workflows](/workflows), [Activities](/activities), and [Task Queues](/task-queue). You should also know how to use key metrics like latency, counter, rate, CPU utilization, and memory usage.

## Task processing metrics

These metrics provide insights into various stages of the [Task](/tasks) lifecycle, from scheduling to completion. The following sections detail common metrics, their potential causes for high latency or resource depletion, and strategies for diagnosing and resolving performance issues.

### `temporal_workflow_task_schedule_to_start_latency` spike

High [`temporal_workflow_task_schedule_to_start_latency`](/references/sdk-metrics#workflow_task_schedule_to_start_latency) (P95 higher than one second) can be caused by several factors. This metric represents the time between when a [Workflow Task](/tasks#workflow-task) is scheduled (enqueued) and when it is picked up by a Worker for processing. Here are some potential causes:

- Insufficient Worker capacity: If there aren't enough Workers, or if the Workers are overloaded, they may not be able to pick up Tasks quickly enough. This can lead to Tasks waiting longer in the queue ([Detect Task Backlog](https://docs.temporal.io/cloud/worker-health#detect-task-backlog)).
- Worker configuration issues: Improperly configured Workers, such as having too few pollers or Task slots, can lead to increased latency ([Detect Task Backlog](https://docs.temporal.io/cloud/worker-health#detect-task-backlog)).
- High Workflow lock latency: If many updates are made to a single Workflow Execution, this can cause Workflow lock latency, which in turn increases the Schedule-To-Start latency. Reduce the rate of Signals sent to a single Execution.
- Network latency: Workers in a different region from the Temporal cluster, or large payload sizes, can introduce additional latency.
To diagnose and address high `temporal_workflow_task_schedule_to_start_latency`, you should:

1. Check Worker CPU and memory usage.
2. Review Worker configuration (number of pollers, Task slots, etc.).
3. Look for any spikes in Workflow or Activity starts that might be overwhelming the system.
4. Ensure Workers are in the same region as the Temporal cluster if possible.

### `temporal_activity_schedule_to_start_latency` spike

High [`temporal_activity_schedule_to_start_latency`](/references/sdk-metrics#activity_schedule_to_start_latency) (P95 higher than one second) can be caused by several factors. This metric represents the time between when an [Activity Task](/tasks#activity-task) is scheduled (enqueued) and when it is picked up by a Worker for processing. Here are some potential causes:

- Insufficient Worker capacity: If there aren't enough Workers, or if the Workers are overloaded, they may not be able to pick up Tasks quickly enough. This can lead to Tasks waiting longer in the queue ([Detect Task Backlog](https://docs.temporal.io/cloud/worker-health#detect-task-backlog)).
- Worker configuration issues: Improperly configured Workers, such as having too few pollers or Task slots, can lead to increased latency ([Detect Task Backlog](https://docs.temporal.io/cloud/worker-health#detect-task-backlog)).
- Task Queue configuration: Setting `TaskQueueActivitiesPerSecond` too low can limit the rate at which Activities are started, leading to increased Schedule-To-Start latency.
- Network latency: Workers in a different region from the Temporal cluster, or large payload sizes, can introduce additional latency.

To diagnose and address high `temporal_activity_schedule_to_start_latency`:

1. Check Worker CPU and memory usage.
2. Review Worker configuration (number of pollers, Task slots, etc.); a configuration sketch follows this list.
3. Look for any spikes in Workflow or Activity starts that might be overwhelming the system.
4. Ensure Workers are in the same region as the Temporal cluster if possible.
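For reference, here is where those knobs live in the Go SDK's `worker.Options`. The values are illustrative starting points only, not recommendations; tune them against the metrics above:

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("Unable to create Temporal client", err)
	}
	defer c.Close()

	w := worker.New(c, "your-task-queue", worker.Options{
		// More pollers pull Tasks off the Task Queue sooner, reducing
		// Schedule-To-Start latency when Tasks sit in the queue.
		MaxConcurrentWorkflowTaskPollers: 4,
		MaxConcurrentActivityTaskPollers: 4,
		// More execution slots let the Worker run more Tasks concurrently,
		// at the cost of additional CPU and memory.
		MaxConcurrentWorkflowTaskExecutionSize: 200,
		MaxConcurrentActivityExecutionSize:     200,
	})

	// Register your Workflows and Activities here, then run the Worker.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("Worker exited", err)
	}
}
```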
### `temporal_workflow_endtoend_latency` spike

The [`temporal_workflow_endtoend_latency`](/references/sdk-metrics#workflow_endtoend_latency) metric represents the total Workflow Execution time, from scheduling to closure, for a single Workflow Run. Normal ranges for this metric depend on the use case, but here are some potential causes of unexpected spikes:

- Complex Workflows: Workflows that have many Activities, or Activities that take a long time to execute.
- Workflow and Activity retries: If Workflows or Activities are configured to retry upon failure and they fail often, this can increase the end-to-end latency, because the system waits for the retry delay before reattempting the failed operation.
- Worker capacity and configuration: If there aren't enough Workers, or if the Workers are overloaded, they may not be able to pick up and process Tasks quickly enough. This can lead to Tasks waiting longer in the queue, thereby increasing the end-to-end latency ([Detect Task Backlog](https://docs.temporal.io/cloud/worker-health#detect-task-backlog)).
- External dependencies: If your Workflows or Activities depend on external systems or services (such as databases or APIs) and these systems are slow or unreliable, they can increase the end-to-end latency.
- Network latency: Workers in a different region from the Temporal cluster can introduce additional latency.

To diagnose and address high `temporal_workflow_endtoend_latency`:

1. Review your Workflow and Activity designs to ensure they are as efficient as possible.
2. Monitor your Workers to ensure they have sufficient capacity (CPU and memory) and are not overloaded.
3. Monitor your external dependencies to ensure they are performing well.
4. Ensure Workers are in the same region as the Temporal cluster if possible.

### High `temporal_workflow_task_execution_latency`

The [`temporal_workflow_task_execution_latency`](/references/sdk-metrics#workflow_task_execution_latency) metric represents the time taken by a Worker to execute a Workflow Task. The Temporal SDK raises a "Deadlock detected during Workflow run" error or [TMPRL1101](https://github.com/temporalio/rules/blob/main/rules/TMPRL1101.md) when a Workflow Task takes more than one or two seconds to complete. Here are some potential causes:

- CPU-intensive work: Performing CPU-intensive operations in your Workflow Task can lead to slow execution.
- Slow Local Activities: Workflow Task execution time includes Local Activity execution time.
- Slow Workflow Replay: Workflow Task execution time includes Workflow Replay time. Refer to `workflow_task_replay_latency` for more details.
- Worker resource constraints: High CPU usage on Worker pods can lead to slower Workflow Task execution. Workers with insufficient CPU resources can cause delays.
- Infinite loops or blocking calls: Workflow code with infinite loops or blocking external API calls can cause the Workflow Task to execute slowly or time out.
- Slow data conversion: Your custom Data Converter takes too long to encode or decode payloads, for example, when talking to a remote encryption service.

To diagnose and address slow Workflow Task execution, you can:

1. Monitor Worker CPU and memory utilization.
2. Ensure that your Workers have adequate resources and are properly scaled for your workload.
3. Consider running your Workflow code in a profiler using a Replayer to see where CPU cycles are spent.
4. Review your Workflow code for potential optimizations or to remove blocking operations.
5. Disable deadlock detection for your Data Converter: this does not reduce Task execution latency, but it does remove the "Deadlock detected during Workflow run" or TMPRL1101 error. In Go, wrap your converter with `workflow.DataConverterWithoutDeadlockDetection` (see the sketch after this list). In Java, surround your Data Converter code with `WorkflowUnsafe.deadlockDetectorOff`.
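A minimal Go sketch of that wrapping; `converter.GetDefaultDataConverter()` stands in here for your own custom Data Converter:

```go
package app

import (
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/converter"
	"go.temporal.io/sdk/workflow"
)

func newClient() (client.Client, error) {
	// Wrap the Data Converter so slow encode/decode calls -- for example,
	// calls to a remote encryption service -- don't trip the Workflow
	// deadlock detector. This does not make conversion faster.
	dc := workflow.DataConverterWithoutDeadlockDetection(converter.GetDefaultDataConverter())
	return client.Dial(client.Options{DataConverter: dc})
}
```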
### High `workflow_task_replay_latency`

Workflow Task Replay is the process of reconstructing the Workflow's state by re-executing the Workflow code from the beginning, using the recorded Event History. This process ensures that the Workflow can continue from where it left off, even after interruptions or failures. [`workflow_task_replay_latency`](/references/sdk-metrics#workflow_task_replay_latency) is high if it exceeds a few milliseconds. Here are the main causes:

- Large Event Histories: Workflows with long histories take more time to replay, as the Worker needs to process all events to reconstruct the Workflow state.
- Data Converter performance: Slow Data Converters, especially those that perform encryption or interact with external services, can impact Replay.
- Large payloads: Activities or Signals with large payloads can slow down the Replay process, especially if the Data Converter needs to process these payloads.
- Complex Workflow logic: Workflow code with complex logic or computationally intensive operations, such as scheduling many concurrent Child Workflows or Activities, can increase Replay latency.
- Frequent cache evictions: If Workers often evict Workflow Executions from their cache (due to memory constraints or frequent restarts), this leads to more Replays and higher latency.
- Worker resource constraints: High CPU utilization or memory pressure on Worker nodes can slow down Replay.

To diagnose and address slow Workflow Task Replay, you can:

1. Monitor SDK metrics: Keep a close eye on the `temporal_workflow_task_replay_latency` metric. This histogram metric measures the time it takes to replay a Workflow Task.
2. Analyze Workflow History size: Check the number of events in your Workflow histories and consider using the Continue-As-New feature for long-running Workflows.
3. Optimize Data Converters: If you're using custom Data Converters, especially for encryption or complex serialization, look for opportunities to optimize their performance.
4. Review payload sizes: Large Activity or Signal payloads can slow down Replay. Consider optimizing the size of data being passed in your Workflows.
5. Profile Workflow code: Use a profiler to identify CPU-intensive parts of your Workflow code that might be slowing down Replay.
6. Manage the Worker cache: Frequent cache evictions can lead to more Replays. Tune your Worker's cache size and eviction policies.

### `temporal_activity_execution_latency` spike

The [`temporal_activity_execution_latency`](/references/sdk-metrics#activity_execution_latency) metric measures the time from when a Worker starts processing an Activity Task until it reports to the service that the Task is complete or failed. There are several potential causes for high `temporal_activity_execution_latency`:

- Activity implementation: The most common cause of high Activity Execution latency is the implementation of the Activity itself. If the Activity performs time-consuming operations or makes slow external API calls, it takes longer to execute.
- External dependencies: If your Activity is constrained by an external resource or service that all Activities access, it could cause increased latency.
- Worker resource constraints: Under-resourced Worker nodes, or nodes experiencing high CPU utilization, can lead to slower Activity Execution.
- Network latency: High latency between your Workers and external services, or the Temporal Service itself, can contribute to increased Activity Execution time.

To diagnose and address high Activity Execution latency:

1. Monitor the `activity_execution_latency` metric, which you can filter by Activity Type and Activity Task Queue.
2. Optimize your Activity implementation to reduce latency, especially with external services or database interactions.
3. Check your Worker CPU and memory utilization to make sure Workers have adequate resources.
4. Examine your Worker configuration, particularly `(Max)ConcurrentActivityExecutionSize` and `(Max)WorkerActivitiesPerSecond`, to ensure they are not limiting your Activity Execution.

### Depletion of `temporal_worker_task_slots_available` for `WorkflowWorker`

The [`temporal_worker_task_slots_available{worker_type="WorkflowWorker"}`](/references/sdk-metrics#worker_task_slots_available) metric indicates the number of available slots for executing Workflow Tasks on a Worker. This metric may go to zero for several reasons:

- High Workflow Task load: If there are more Tasks than the Worker can handle concurrently, the available slots will be depleted. This can happen if the rate of incoming Tasks is higher than the rate at which Tasks are being completed.
- Worker configuration: The number of available slots is determined by the Worker configuration, specifically the `MaxConcurrentWorkflowTaskExecutionSize` setting. If this is set too low, the Worker may not have enough slots to handle the Task load.
- High `temporal_workflow_task_execution_latency` and `workflow_task_replay_latency`: slow Workflow Task execution or Replay keeps slots occupied for longer.

To prevent depletion of Workflow Task slots:

1. Monitor Worker CPU and memory usage while increasing `(Max)ConcurrentWorkflowTaskExecutionSize` to add more execution slots.
2. Scale Workers both vertically (increasing CPU and memory) and horizontally (increasing the number of Worker instances).

### Depletion of `temporal_worker_task_slots_available` for `ActivityWorker`

The [`temporal_worker_task_slots_available{worker_type="ActivityWorker"}`](/references/sdk-metrics#worker_task_slots_available) metric indicates the number of available slots for executing Activity Tasks on a Worker. This metric may go to zero for several reasons:

- Blocked Activities and zombie Activities: The most common cause is Activities that are blocked or not returning on time. Zombie Activities are a subset of this category: they occur when an Activity times out (hits its `StartToClose` or `Heartbeat` Timeout) and has stopped Heartbeating but continues to run, occupying some or all of the slots as more retries occur. This can happen if:
  - The Activity code is blocking on a downstream service call or an infinite loop.
  - There's a mismatch between the Activity's `StartToClose` Timeout and any client-side timeouts for external calls.
- Resource utilization: High CPU or memory usage on Workers can cause Activities to block and not release slots.

To prevent depletion of Activity Task slots:

1. Monitor Worker CPU and memory usage while increasing `(Max)ConcurrentActivityExecutionSize` to add more execution slots.
2. Add a client-side timeout to your downstream API client (see the sketch after this list).
3. Review your Task code to ensure Tasks complete within a reasonable time, as measured by `temporal_activity_execution_latency`.
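As referenced above, here is a hedged Go sketch of an Activity that bounds its downstream call and Heartbeats. `CallDownstream` is a hypothetical Activity, and the 10-second HTTP timeout is an arbitrary example; it should stay below the Activity's `StartToClose` Timeout:

```go
package app

import (
	"context"
	"io"
	"net/http"
	"time"

	"go.temporal.io/sdk/activity"
)

// CallDownstream calls an external API with a client-side timeout so a hung
// call returns and frees its execution slot, instead of becoming a "zombie"
// that only the StartToClose Timeout would eventually reap.
func CallDownstream(ctx context.Context, url string) (string, error) {
	httpClient := &http.Client{Timeout: 10 * time.Second}

	// Heartbeat so the service can detect a stuck Activity via its
	// Heartbeat Timeout; long Activities should heartbeat periodically.
	activity.RecordHeartbeat(ctx, "calling downstream")

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	resp, err := httpClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	return string(body), err
}
```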
## Network requests

Network issues can impact Temporal Clients and Workers, leading to delays, failures, and overall system instability. This section focuses on metrics that reveal common network-related problems with your Temporal deployment, specifically related to network connectivity, latency, and request failures. These metrics can indicate where bottlenecks exist within the communication channels between Temporal Clients (including Temporal Workers) and the Temporal Server.

### High `temporal_long_request_failure`

The [`temporal_long_request_failure`](/references/sdk-metrics#long_request_failure) metric counts the number of failed RPC long poll requests for `PollWorkflowTaskQueue`, `PollActivityTaskQueue`, and `GetWorkflowExecutionHistory` (when polling new events). High values of this metric can be caused by several factors:

- Network issues: Problems with the network connection between the Temporal Client and the Temporal Server, including firewalls and proxies, can cause long poll requests to fail.
- Rate limiting: If the rate of requests exceeds the configured limits on the Temporal Server or Temporal Cloud, additional requests may be rejected, increasing the `temporal_long_request_failure` count. This is often indicated by a `ResourceExhausted` status code.
- Server errors: If the Temporal Server is experiencing issues, it may fail to respond to long poll requests correctly, leading to an increase in `temporal_long_request_failure`.

To diagnose the cause of high `temporal_long_request_failure`, you can:

1. Check the operation and the status or code tag of the `temporal_long_request_failure` metric to see the type of errors that are occurring.
2. If you receive a `ResourceExhausted` status code, review the rate limits configured on the Temporal Server, or ask Temporal Support for help if you use Temporal Cloud.
3. Check the network connection between the Temporal Client and the Temporal Server.

### High `temporal_request_failure_total`

The [`temporal_request_failure_total`](/references/sdk-metrics#request_failure) metric counts the number of RPC requests made by the Temporal Client that have failed. High values of this metric can be caused by several factors:

- Network issues: Problems with the network connection between the Temporal Client and the Temporal Server can cause requests to fail.
- Client errors: If there's an issue with the Temporal Client, such as misconfiguration or resource exhaustion, it may fail to make requests correctly.
- Operation errors: Specific operations like `SignalWorkflowExecution` or `TerminateWorkflowExecution` can fail if they try to act on a closed Workflow Execution that no longer exists (because it completed and was removed from persistence when it hit the Namespace retention time).
- Rate limiting: If the rate of requests exceeds the configured limits on the Temporal Server, additional requests may be rejected, increasing the counter. This is often indicated by a `ResourceExhausted` status code.
- Request size limit: If the Worker tries to return an Activity response that is larger than the blob size limit (2 MB), the service rejects it, causing a request failure.
- Server errors: If the Temporal Server is experiencing issues, it may fail to respond to requests correctly, leading to an increase in `temporal_request_failure_total`.

To diagnose the cause of high `temporal_request_failure_total`, you can:

1. Check the status or code tag of the `temporal_request_failure_total` metric to see the type of errors that are occurring (see the sketch after this list).
2. Look at the operation tag of the `temporal_request_failure_total` metric to see which operations are failing.
3. Monitor the Temporal Server logs and the Temporal Client logs for any error messages or warnings.
4. Check the network connection between the Temporal Client and the Temporal Server.
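When digging into these failures from application code, the Go SDK surfaces gRPC statuses as typed errors from `go.temporal.io/api/serviceerror`. A minimal sketch, assuming `err` came from a failed Client call:

```go
package app

import (
	"errors"
	"log"

	"go.temporal.io/api/serviceerror"
)

// classify logs a coarse category for a failed Temporal client request.
func classify(err error) {
	var resourceExhausted *serviceerror.ResourceExhausted
	var notFound *serviceerror.NotFound
	switch {
	case errors.As(err, &resourceExhausted):
		// Rate limits hit on the Temporal Server or Temporal Cloud.
		log.Println("resource exhausted, cause:", resourceExhausted.Cause)
	case errors.As(err, &notFound):
		// For example, signaling a Workflow Execution that was removed
		// from persistence after the Namespace retention period.
		log.Println("not found:", err)
	default:
		log.Println("request failed:", err)
	}
}
```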
### High `temporal_request_latency`

The [`temporal_request_latency`](/references/sdk-metrics#request_latency) metric measures the latency of gRPC requests made by the Temporal Client. High values for this metric can be caused by several factors:

- Network latency: The physical distance and network conditions between the Temporal Client and the Temporal Server can affect the latency of requests.
- Network transfer time: Larger payloads take longer to transfer over the network, which affects request latency. For example, large payloads in `RespondWorkflowTaskCompleted` can affect the latency of the request. This is especially true when Workflows schedule multiple Activities with large inputs.
- Resource exhaustion: Running out of resources (such as CPU or memory) on the Client or Server can cause delays in processing the request.
- Client configuration: Improper Client configuration, such as setting thread pool sizes too aggressively or having memory constraints that are too low for the number of allocated threads, can lead to situations where Tasks overwhelm the Client, causing increased latency.
- Server load: If the Temporal Server is under heavy load, it may take longer to respond to requests, leading to increased latency.

To diagnose and address high `temporal_request_latency`:

1. Monitor the `temporal_request_latency` metric to identify when and where latency spikes occur.
2. Check the network connection between the Temporal Client and the Temporal Server.
3. Monitor the resource usage on both the Temporal Client and the Temporal Server.
4. Review your Temporal Client configuration to ensure it is optimized for your workload.
5. If you're using Temporal Cloud, check whether the Cloud's [service-latency](https://docs.temporal.io/cloud/metrics/reference#service-latency) metric spikes, and reach out to Temporal Support for help.

### `rate(temporal_long_request_total{operation="PollActivityTaskQueue"})`

The [`rate(temporal_long_request_total{operation="PollActivityTaskQueue"})`](/references/sdk-metrics#long_request) expression measures the per-second average rate of `PollActivityTaskQueue` long poll requests over a certain period of time. `PollActivityTaskQueue` is an operation where Workers poll for Activity Tasks from the Task Queue. The `temporal_long_request_total` metric counts the number of these long poll requests. By applying the `rate()` function in Prometheus, you can calculate the per-second average rate of these requests over the time range specified in the query. This can help you understand the load on your Temporal service and how often your Workers are polling for Activity Tasks.

### `rate(temporal_long_request_total{operation="PollWorkflowTaskQueue"})`

The [`rate(temporal_long_request_total{operation="PollWorkflowTaskQueue"})`](/references/sdk-metrics#long_request) expression measures the per-second average rate of `PollWorkflowTaskQueue` long poll requests over a certain period of time. `PollWorkflowTaskQueue` is an operation where Workers poll for Workflow Tasks from the Task Queue. The `temporal_long_request_total` metric counts the number of these long poll requests. By applying the `rate()` function in Prometheus, you can calculate the per-second average rate of these requests over the time range specified in the query. This can help you understand the load on your Temporal service and how often your Workers are polling for Workflow Tasks.

## Caching

Temporal Workers rely on caching to optimize performance by reducing the overhead of fetching Workflow state from the Event History and Replaying. However, caching can't be unlimited; there's a trade-off between the benefits of cached data and the memory it consumes. These metrics allow you to balance performance gains with responsible memory usage.

### `temporal_sticky_cache_size`

The [`temporal_sticky_cache_size`](/references/sdk-metrics#sticky_cache_size) metric represents the number of Workflow Executions currently cached in a Worker's memory. The sticky cache is used to improve performance by keeping the Workflow state in memory, reducing the need to reconstruct the Workflow from its Event History for every Task. It's particularly useful for latency-sensitive Workflows.

There is a direct relationship between the sticky cache size and Worker memory consumption. As the cache size increases, so does the memory usage of the Worker. The maximum size of the sticky cache can be configured; for example, the default in the Go SDK is 10,000 Workflows. A larger sticky cache can improve performance by reducing the need to replay Workflow histories, but it also increases memory usage, which can lead to issues if not properly managed.

Monitor this metric alongside Worker memory usage. A sudden increase in `sticky_cache_size` can correlate with increased memory consumption and potential performance issues. If memory consumption is too high, you can reduce the maximum sticky cache size. Conversely, if you have available memory and want to improve performance, you might increase it (see the sketch below).
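In the Go SDK, the sticky cache is sized process-wide. A minimal sketch; the value shown is the Go SDK default mentioned above, used here only as an illustration:

```go
package main

import (
	"go.temporal.io/sdk/worker"
)

func init() {
	// Applies to every Worker in this process; set it before starting any
	// Worker. Larger values trade memory for fewer Event History Replays.
	worker.SetStickyWorkflowCacheSize(10000)
}
```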
### `temporal_sticky_cache_hit_total` and `temporal_sticky_cache_miss_total`

The [`temporal_sticky_cache_hit_total`](https://docs.temporal.io/references/sdk-metrics#sticky_cache_hit) metric is a counter that measures the total number of times a Workflow Task found a cached Workflow Execution to run against. Its opposite, [`temporal_sticky_cache_miss_total`](https://docs.temporal.io/references/sdk-metrics#sticky_cache_miss), is a counter that measures the total number of times a Workflow Task did not find a cached Workflow Execution to run against.

Sticky Execution is a feature where a Worker caches a Workflow Execution and creates a dedicated Task Queue to listen on. This improves performance because the Temporal Service sends only new events to the Worker instead of entire Event Histories, and the Workflow doesn't have to Replay. A "hit" means the Worker found the Workflow in its cache when processing a Workflow Task, allowing immediate processing without fetching the full Event History from the server and Replaying. A "miss" means the Worker didn't find the Workflow in its cache, so it must fetch the Event History and Replay.

Monitoring and comparing these two metrics can help you understand how your sticky cache is being used. A high rate of cache hits with a low rate of cache misses indicates that your Workflows are being scheduled efficiently, with minimal need for fetching Event Histories and Replaying.

### `temporal_sticky_cache_total_forced_eviction_total`

The [`temporal_sticky_cache_total_forced_eviction_total`](https://docs.temporal.io/references/sdk-metrics#sticky_cache_total_forced_eviction) metric is a counter that measures the total number of Workflow Executions that have been forcibly evicted from the sticky cache. As described in the previous section, Sticky Execution caches Workflow Executions on a dedicated Task Queue so that the Temporal Service can send only new events instead of entire Event Histories.

A "forced eviction" in this context means that a Workflow Execution was removed from the cache before it completed, typically because the cache was full and needed to make room for other Workflow Executions. If the Worker needs to process more Tasks for the evicted Workflow Execution, it will have to fetch the entire Event History from the Temporal Service and Replay.

Monitoring the `temporal_sticky_cache_total_forced_eviction_total` metric can help you understand how often your Workflows are being evicted from the cache. A high rate of forced evictions could indicate that your cache size is too small for your workload, and you may need to increase the `WorkflowCacheSize` setting if your Worker resources can accommodate it.

---

## Use These Docs with AI

Connect Temporal documentation directly to your AI assistant for accurate, up-to-date answers about Temporal. The Temporal docs MCP server gives AI tools real-time access to our documentation, so responses draw from current docs rather than training data.

The server requires anonymous authentication with any Google account to enforce rate limits and prevent abuse. We do not see or collect any contact information through this process.
## Claude Code

Add the Temporal docs MCP server to Claude Code with a single command:

```bash
claude mcp add --scope user --transport http temporal-docs https://temporal.mcp.kapa.ai
```

This adds the server globally so it's available in all your projects. To add it to a specific project only (stored in `.mcp.json`):

```bash
claude mcp add --transport http temporal-docs https://temporal.mcp.kapa.ai
```

After adding, restart Claude Code and run `/mcp` to authenticate with your Google account.

## Claude Desktop

1. Open Claude Desktop settings
2. Navigate to **Settings > Connectors**
3. Add a new MCP server with the URL: `https://temporal.mcp.kapa.ai`

## Other MCP-compatible tools

For any tool that supports the Model Context Protocol, use the following server URL:

```
https://temporal.mcp.kapa.ai
```

Configuration format varies by tool. Here's a generic JSON configuration:

```json
{
  "mcpServers": {
    "temporal-docs": {
      "transport": "http",
      "url": "https://temporal.mcp.kapa.ai"
    }
  }
}
```