# Access control in sandbox

Source: https://docs.sandboxes.cloud/docs/access-control.md

This page describes how to manage access to Crafting sandboxes.

The Crafting platform provides a collaborative development experience by allowing members of the same organization to access each other's sandboxes. In certain instances, some sandboxes are not expected to be broadly accessible to all members, as they may contain sensitive information requiring restricted access. This is facilitated by **Sandbox Access Control**.

## Private mode sandbox

The access level of a sandbox can be changed at any time to one of the following:

* `Default`: all members can access it for collaboration;
* `Private`: only the owner can access it.

The *accessibility* governed by the access level covers any of the following operations, effectively locking down access to the file system in the workspaces:

* SSH into any workspace in the sandbox, including all SSH-based operations, like scp, rsync, mutagen, VS Code, etc.;
* Launch the Web IDE to access a workspace;
* Update the sandbox;
* Delete the sandbox.

When the access level is raised to `Private`, non-owners can't perform any of the above operations, but are still able to:

* List the sandbox;
* Read information about the sandbox, including the full definition (endpoints, workspaces, dependencies, etc.).

`ADMIN` users can't access the sandbox via SSH or the Web IDE if they are not the owner; however, they are able to update or delete the sandbox regardless of the access level.

## Personal secrets are only mounted in private mode

The [secrets](https://docs.sandboxes.cloud/docs/secrets) mounted in the sandbox change automatically depending on the access level. Shared secrets are always mounted, while personal secrets are only mounted when the access level is `Private`. The change is applied automatically at the time the access level is updated.

## How to set sandbox into private mode

To change the access level of a sandbox, the owner (or an administrator) can do it on the web console. It can also be done via the CLI command:

```shell
cs sandbox access [private|shared]
```
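For example, to set a sandbox to private and verify the change (the `access show` subcommand is documented in the Command Line Tool reference later in these docs):

```shell
cs sandbox access private   # restrict the sandbox to its owner
cs sandbox access show      # confirm the current access level
```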
## Use cases for private mode

The key use cases for private mode include:

* **Mount per-developer credentials for secure access to dev resources**: To make sure each developer only uses the credentials, e.g. access keys, assigned to them in their dev environment, these secrets are stored as `Personal Secrets`. As described above, `Personal Secrets` are only mounted when the sandbox is in private mode, to prevent other sandbox users from accessing them.
* **Keep source code private**: In some cases, not all developers are supposed to have equal access to all the source code. To prevent developers from accessing source code they are not supposed to see, sandboxes with that source code checked out should be set to private mode.

If an organization has these use cases, it can choose to set `Private` as the default for all newly launched sandboxes. Please see [Organizational settings](https://docs.sandboxes.cloud/docs/org-settings) for details.

## Role-based access control (RBAC)

The enterprise version of the Crafting platform supports full-featured Role-based Access Control (RBAC), which allows administrators to define fine-grained control on templates, sandboxes, resources, etc., as well as folders for each team to group their resources.

Please contact us at [contact@crafting.dev](mailto:contact@crafting.dev) for a detailed description and user guide for RBAC.

---

# Account Setup

Source: https://docs.sandboxes.cloud/docs/account-setup.md

Setting up an account is the first thing that needs to be done when setting up Crafting. On this page, we cover the following topics regarding account setup on the Crafting platform:

* [Create an Account](#create-an-account)
* [Add Administrators and Users](#add-administrators-and-users)
  * [Domain-based automatic user creation](#domain-based-automatic-user-creation)
  * [SAML integration](#saml-integration)
* [Service Account and Login Token](#service-account-and-login-token)
* [Role-based Access Control (RBAC)](#role-based-access-control-rbac)

## Create an Account

To create an account on Crafting for a new organization, please [contact us](https://crafting.dev/contact). We support account setup on our [SaaS platform](https://docs.sandboxes.cloud/docs/crafting-saas) or in your cloud using [Crafting Enterprise](https://docs.sandboxes.cloud/docs/crafting-enterprise). We also offer setup assistance and free trials to fit your specific dev needs. When installing [Crafting Express](https://docs.sandboxes.cloud/docs/crafting-express), the admin account is created automatically during the procedure.

## Add Administrators and Users

Administrators can add other administrators and users to the organization. From the `Team -> Members` page on our web console, you can see every member in the organization account and add a new member by clicking the `Add` button and filling in the information in the dialog. After clicking `Add`, an invite will be sent to the newly added member, who can then log in to the system using their email (see [Login](https://docs.sandboxes.cloud/docs/login)).

It can also be done via the CLI:

```shell
cs org member add EMAIL [--admin]
```
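For example — a sketch assuming the member's email is passed as the final argument, as in the synopsis above (the emails are hypothetical):

```shell
cs org member add alice@example.com          # add a regular member
cs org member add bob@example.com --admin    # add an administrator
```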
Creating users one by one is often troublesome for large teams. Crafting supports the following ways to support large teams.

### Domain-based automatic user creation

On the Crafting platform, the admin can set up an `Organization Domain`; that way, all users with emails from that domain are automatically added as Crafting users for that organization. The user is created on demand upon first login.

### SAML integration

For customers adopting [Crafting Enterprise](https://docs.sandboxes.cloud/docs/crafting-enterprise), a SAML identity provider can be configured to provide the login service for Crafting. For more details, please [contact us](https://crafting.dev/contact).

## Service Account and Login Token

Crafting allows the user to create service accounts for accessing the Crafting platform, which is useful in the following scenarios:

* Internal tool integration: A team admin can create a service account to be used by tools such as CI tools, automation scripts, etc., so that these tools can leverage the Crafting platform. See [Git Service Integration for Preview](https://docs.sandboxes.cloud/docs/git-integration) for an example.
* External demos or collaborations: A team admin can grant temporary access to external collaborators, e.g., customers, partners, vendors, etc., to let them access the Crafting system to work with the team members.

To create a service account, go to `Team -> Service Accounts`, click `Add` highlighted below, and then fill in the name. Note that a service account carries an email only as a placeholder. After clicking `Confirm`, a service account is created.

**A service account is a non-user account and can only be accessed via a login token**, so let's see how to create a login token to access that account. On the same page, click `Add` highlighted below, and fill in the dialog. After creating the login token, it can be used to log in from the CLI and a web browser. The instructions can be found by clicking the expanding button highlighted below.

![image](https://files.readme.io/d3d46b3-image.png)

The login token will expire according to its `Expire At` setting, and you can delete it at any time. Similarly, the `Service Account` can be disabled (by editing) or deleted on this page.

### Revoking access and best practices

Once a `Login Token` is used, the active session is under the identity of the service account. Deleting the Login Token doesn't invalidate the session. To disable the session, disable/delete the Service Account. If the Service Account is to be shared with external contributors, create the `Service Account` on demand and delete it when no longer needed.

### System service accounts

These are reserved service accounts, with emails suffixed by `@sys.sandbox`. They are used by the Crafting Sandbox internally. Users may be able to see them, but are not allowed to change them. The currently available service accounts are:

* `support@sys.sandbox`: this account is used by support personnel from Crafting to perform support operations in the organization, like troubleshooting.

## Role-based Access Control (RBAC)

The Enterprise edition of Crafting supports Role-based Access Control (RBAC), which offers fine-grained access control for users in custom-defined roles. The access control can be defined with respect to specific types of resources, such as templates, sandboxes, resources, etc. It also helps large engineering organizations organize teams' assets into different folders. For more information regarding RBAC, please [contact us](https://crafting.dev/contact).

---

# Adjust config of a sandbox

Source: https://docs.sandboxes.cloud/docs/adjust-config.md

This page describes how to adjust the config of an existing sandbox. Whether your sandbox came from a single repo in a single workspace or was created from a well-defined template, you can adjust its config dynamically.

Crafting allows config changes to be made to a sandbox after it's already created. You can make smaller changes, such as adjusting environment variables or port forwarding, as well as significant changes, such as changing snapshots or adding workspaces, containers, endpoints, or dependencies. After making changes, you can also choose to save the sandbox configuration as a template to persist the changes.

For a sandbox created from a template, editing the configuration will disassociate the sandbox from the template. To allow editing, you can click the `EDIT` button in the action menu, see below. After confirming, the sandbox will be editable as a [standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox).

For a sandbox not associated with any template, a.k.a. a [standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox), you can directly edit the config by clicking the `Edit` button in the top-right corner to get into the editing view, as shown below.

After getting into the editing view, you can adjust all configurations from the UI by directly clicking each card, or edit the whole config as a YAML file by clicking `Edit in YAML`. After editing, you can click `Apply` to apply the change to the sandbox.
You can also test how the new configuration works in a brand-new sandbox via `Try with New Sandbox`, or save the config as a template via `Save as Template`. This is a great way to safely try any new configs, because your modification of the configuration here only affects this particular sandbox; your other sandboxes and your teammates' sandboxes won't be affected. Details on how to adjust configurations can be found [here](https://docs.sandboxes.cloud/docs/standalone-sandbox).
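For instance, a minimal sketch of such an edit — adding an environment variable to a workspace (the `dev` workspace name and the variable are hypothetical; the per-workspace `env` list is the same field used in the setup guides later in these docs):

```yaml
workspaces:
- name: dev
  env:
  - FEATURE_FLAG=preview   # hypothetical variable injected into the workspace
```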
If you prefer, you can also edit a sandbox from the command line using the Crafting CLI:

```shell
cs sandbox edit
```

---

# Admin Overview

Source: https://docs.sandboxes.cloud/docs/admin-overview.md

In this chapter, we talk about how to set up and manage Crafting from an administrator's point of view to boost your engineering team's productivity.

## Best Practices from Adopting Crafting

With Crafting, engineering teams can more easily follow best practices for managing their development environments. These best practices include:

* Establish a standard, repeatable dev environment for each project, shared by every developer
* Manage packages and libraries centrally, upgrading them from one place
* Enable end-to-end testing for each Pull Request in per-developer environments
* Allow developers to create new environments on demand and dispose of them freely
* Develop services in a production-like environment with multiple services running upstream/downstream, instead of on a single machine
* Manage dev resources centrally, allocate them on demand, and clean up after use
* Control access to source code and other dev resources, and store dev credentials properly

Crafting provides tooling to help achieve these best practices with an emphasis on flexibility and customizability, so each administrator can set up the Crafting system to best fit their usage scenarios.

## Overview of the Setup Process

The setup process varies greatly case by case. It can be as simple as creating an account and providing git access for simply coding in an online workspace, which takes only a few minutes. Depending on the usage scenario and the complexity of your product, it's up to each administrator to decide how much effort to put into setting it up. Typically, setting up more automation requires more effort, but at the same time provides a better developer experience and improves productivity. Here is a high-level overview of the setup process to provide some guidance on what you need to set up.

* [Account Setup](https://docs.sandboxes.cloud/docs/account-setup): This is for managing accounts on Crafting, and is required for everyone.
* [Git Access](https://docs.sandboxes.cloud/docs/git-access): This sets up git access from the Crafting sandbox to your git repository, which is the foundation of many automations and likely required by most teams.
* [Setup Templates for Dev Environments](https://docs.sandboxes.cloud/docs/templates-setup): These pages talk about how to create templates to make the dev environment repeatable and optimize the experience. Most teams need to set up one or more templates. There are many topics in this part, some of which could be crucial for the best dev experience in your scenario. Depending on your dev environment needs, you may not need to set up everything mentioned in this section, so please choose the relevant components.
* [Git Service Integration for Preview](https://docs.sandboxes.cloud/docs/git-integration): This further connects Crafting to your GitOps process, allowing sandboxes to be created from Pull Requests more easily and automatically.
* [Advanced Setup](https://docs.sandboxes.cloud/docs/advanced-setup): This includes the following advanced setup topics:
  * [Home screen message and sandbox instruction](https://docs.sandboxes.cloud/docs/home-screen-message-and-sandbox-instruction): This helps you educate developers in your team about using the sandbox and provides shortcuts for them.
  * [Secrets for storing dev credentials](https://docs.sandboxes.cloud/docs/secrets): This provides instructions on how to store dev credentials securely.
  * [Endpoint alias and endpoint routing](https://docs.sandboxes.cloud/docs/endpoint-alias): This is helpful for setting up a sandbox for third-party integrations with callbacks and webhooks.
  * [Organizational settings](https://docs.sandboxes.cloud/docs/org-settings): This provides additional settings for you to manage the sandboxes better.
* [Setup for Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-setup): This is Kubernetes-specific setup, which is important if you are using Kubernetes.
* [Setup for Cloud Resources](https://docs.sandboxes.cloud/docs/cloud-resources-setup): This connects your sandbox with cloud resources from providers such as AWS and GCP, which is important if services in your dev environments use cloud-native resources, such as SQS or Lambda.

## System Maintenance

Crafting is designed to minimize the maintenance burden on administrators. If you are using Crafting SaaS, since it's a fully managed solution, there is zero maintenance effort on your side. If you are using Crafting Self-Hosted, our *managed self-hosting* solution performs automatic upgrades of the system and node-pool management, with permissions granted to us for operating the system in your cloud account. We also actively monitor system health and respond to issues on the deployed site.

---

# Advanced Setup

Source: https://docs.sandboxes.cloud/docs/advanced-setup.md

Here, we discuss the advanced setups the Crafting platform supports for setting up your team's dev environments, including:

* [Home screen message and sandbox instruction](https://docs.sandboxes.cloud/docs/home-screen-message-and-sandbox-instruction)
* [Secrets for storing dev credentials](https://docs.sandboxes.cloud/docs/secrets)
* [Endpoint alias and endpoint routing](https://docs.sandboxes.cloud/docs/endpoint-alias)
* [Organizational settings](https://docs.sandboxes.cloud/docs/org-settings)

---

# Auto-follow code branch in sandbox

Source: https://docs.sandboxes.cloud/docs/auto-follow.md

On this page, we describe how to use the `auto-follow` feature in the Crafting sandbox for your development and preview.

Crafting allows developers to switch some workspaces into `AUTO` mode, where the workspace periodically checks the Git repo to see whether a new version of the code is available for its branch. If so, it pulls the new version of the code into the workspace and reruns the hooks to build the code and restart the service. That way, the service running in a workspace in `AUTO` mode is always up to date. Note that once a workspace is in `AUTO` mode, developers should not edit code manually there, because all edits will be discarded and may potentially interfere with the automation. We recommend turning off `AUTO` mode before editing code and debugging in the workspace.
## Turn on/off Auto mode for a workspace

To turn on Auto mode for an existing sandbox, we can simply turn on the toggle on the workspace from the sandbox page, as highlighted below.

Auto mode can also be controlled via the CLI using the following command:

```shell
cs mode [auto|manual] -W WORKSPACE
```

The `-W` option is not needed when running the `cs mode` command in the target workspace.

`AUTO` mode can also be selected for a newly created sandbox by toggling the switch on the customization page, as shown below.

## Use cases for Auto mode

There are several key use cases for Auto mode:

* **To keep a PR preview sandbox up-to-date**: With Auto mode, the code in the target workspace is kept up to date with the branch of the PR, which means it automatically reflects new commits pushed to the branch (see the example after this list).
* **To support hybrid development with code sync**: Auto mode can be used to sync code from the local machine to cloud workspaces in a more automated way. Please see [Code sync for hybrid development](https://docs.sandboxes.cloud/docs/code-sync) for details.
* **To have a sandbox always following master or staging**: We can turn on Auto mode for all workspaces to let them follow a specific branch such as `master` or `staging`. That way, developers have a place to check the latest flow. It's also a reasonable practice to `pin` such a sandbox to keep it always on.
* **To keep dependency code up-to-date**: We can turn on Auto mode to let all services except the target service we work on follow the master branch. This way, we always have an up-to-date context in our dev environment.
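For example, the `-A` flag of `cs sandbox create` (documented in the Command Line Tool reference) creates a sandbox with all workspaces already in `AUTO` mode:

```shell
# Create a preview sandbox from the "demo" Template with every workspace in AUTO mode
cs sandbox create preview1 -t demo -A '*'
```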
---

# Basic Steps

Source: https://docs.sandboxes.cloud/docs/basic-steps.md

In this section, we describe the basic steps for using Crafting Sandbox in your day-to-day work. We will go through the following:

* [Launch a sandbox](https://docs.sandboxes.cloud/docs/launch-a-sandbox): how to start a new sandbox with your code and configuration, using a simple version of a demo app to illustrate the process.
* [Work on a sandbox](https://docs.sandboxes.cloud/docs/work-on-a-sandbox): basic topics on how to work in a sandbox, including: writing code, running commands, seeing previews, viewing logs, etc.
* [Use command line tool](https://docs.sandboxes.cloud/docs/use-command-line-tool): go over some basics of our command line tool, `cs`.
* [Code with VS Code](https://docs.sandboxes.cloud/docs/code-with-vs-code): common topics regarding using VS Code to work on code inside the Crafting Sandbox.
* [Code with JetBrains IDEs](https://docs.sandboxes.cloud/docs/code-with-jetbrains-ides): common topics regarding using JetBrains IDEs, such as `IntelliJ`, `RubyMine`, `PyCharm`, `GoLand`, `WebStorm`, `CLion`, etc., to work on code inside the Crafting Sandbox.
* [Suspend and resume](https://docs.sandboxes.cloud/docs/suspend-and-resume): how to suspend and resume a sandbox.

For more advanced topics, please check [Advanced Topics](https://docs.sandboxes.cloud/docs/advanced-topics).

---

# Setup for Cloud Resources

Source: https://docs.sandboxes.cloud/docs/cloud-resources-setup.md

Crafting can manage the lifecycle of resources outside the system, like services from the cloud providers (e.g. RDS and SQS on AWS, Pub/Sub on GCP), as all-in-one, self-contained dev environments. The lifecycle management provides hooks to provision/unprovision resources during sandbox creation and deletion, and optionally scale them down/up during sandbox suspension and resumption. This section walks through the one-time setup and a few examples of using the sandbox lifecycle.

For the user guide on how a developer uses this setup to develop with cloud resources, please see [Develop with cloud resources](https://docs.sandboxes.cloud/docs/cloud-resources-dev).

The outline of this page:

* [Access Setup](#access-setup)
  * [How identity federation works](#how-identity-federation-works)
  * [AWS guide](#aws-guide)
  * [GCP guide](#gcp-guide)
* [Setup Per-Sandbox Cloud Native Resources](#setup-per-sandbox-cloud-native-resources)
  * [Prepare Provision Scripts](#prepare-provision-scripts)
  * [Define resources in sandbox](#define-resources-in-sandbox)
  * [Share the template](#share-the-template)
* [Advanced Topics](#advanced-topics)
  * [Details about the resources](#details-about-the-resources)
  * [Suspend and resume](#suspend-and-resume)
  * [Restrict Access to Workspaces and Secrets](#restrict-access-to-workspaces-and-secrets)

## Access Setup

In most cases, access setup is needed to manage resources from the cloud provider. It's recommended to use **identity federation** to set up access without persisting sensitive information. Alternatively, you can also store credentials using `Secrets` (see [Secrets for storing dev credentials](https://docs.sandboxes.cloud/docs/secrets)).

### How identity federation works

The Crafting system can be registered in the Cloud IAM as an OIDC (OpenID Connect) identity provider. After that, bind it to Roles on AWS or Service Accounts on GCP, so identities from the Crafting system can access the cloud under the corresponding service account or role.

### AWS guide

1. Add an *Identity Provider* to IAM: from IAM, add an *Identity Provider* of type `OpenID Connect`, with the following information:
   * Provider URL: `https://sandboxes.cloud` (or the **site URL** for self-hosted Crafting);
   * Audience: your **org name** in the Crafting system.

2. Assign a role: add an `AssumeRole` policy (aka *Trust relationships*) like the following to the designated role. The `<PROVIDER>` is the host name in the provider URL, e.g. `sandboxes.cloud`; the `<ORG_NAME>` must be lower-cased.

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<PROVIDER>"
         },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
           "StringEquals": {
             "<PROVIDER>:aud": [
               "<ORG_NAME>"
             ]
           }
         }
       }
     ]
   }
   ```

3. In the sandbox, use the following content as `$AWS_CONFIG_FILE`:

   ```ini
   [default]
   region = <REGION>
   credential_process = idfed aws
   ```

   It is recommended to save that as a secret and add an environment variable in the Template:

   ```shell
   cs secret create --shared aws-config -f config-file
   ```

   Add the following entry to the sandbox-level `env` or per-workspace:

   ```yaml
   env:
   - AWS_CONFIG_FILE=/run/sandbox/fs/secrets/shared/aws-config
   ```

4. It's also possible to attach the above `AssumeRole` policy to more than one role, and use *profiles* in the `$AWS_CONFIG_FILE` to specify different roles for different processes:

   ```ini
   [default]
   region = <REGION>
   credential_process = idfed aws

   [profile role1]
   region = <REGION>
   credential_process = idfed aws

   [profile role2]
   region = <REGION>
   credential_process = idfed aws
   ```

With the above setup, all sandbox users can use the AWS CLI from workspaces to directly access the AWS account. Set the `AWS_PROFILE` environment variable before launching a process so the process runs under the corresponding role. To quickly validate the setup, run the following command:

```shell
aws sts get-caller-identity
```
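For example, to run a one-off check under a specific role from the profiles above:

```shell
# Runs under the role bound to "role1" instead of the default profile
AWS_PROFILE=role1 aws sts get-caller-identity
```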
### GCP guide

1. Add an *Identity Provider* to IAM: this can be done from the IAM / Workload Identity Federation menu on the console. With the Google Cloud SDK, use the following commands:

   ```sh
   gcloud iam workload-identity-pools create ${POOL_ID} --location=global
   gcloud iam workload-identity-pools providers create-oidc ${PROVIDER_ID} \
     --issuer-uri="https://sandboxes.cloud" --allowed-audiences=${SANDBOX_ORG} \
     --attribute-mapping="google.subject=assertion.sub" \
     --workload-identity-pool=${POOL_ID} --location=global
   ```

2. Bind to a service account (can be multiple service accounts):

   ```sh
   gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser \
     --member "principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/*" \
     ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
   ```

3. In the sandbox, use the following content for the file pointed to by `$GOOGLE_APPLICATION_CREDENTIALS`:

   ```json
   {
     "type": "external_account",
     "audience": "//iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/<POOL_ID>/providers/<PROVIDER_ID>",
     "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
     "token_url": "https://sts.googleapis.com/v1/token",
     "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com:generateAccessToken",
     "credential_source": {
       "file": "/run/sandbox/fs/metadata/1000/token",
       "format": {
         "type": "text"
       }
     }
   }
   ```

Specifically, for accessing a GKE cluster, use the following as the user credential in the `kubeconfig` file:

```yaml
apiVersion: v1
kind: Config
...
users:
- name: foo
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: idfed
      args:
      - gke
```

With the above setup, the processes in the sandbox should be able to access the GCP project (and GKE clusters), except for some old client libraries that don't support GCP Workload Identity Federation. It's recommended to save the above config files as secrets and add environment variables to the App or per-workspace (assuming the secrets are `gcp-account.json` and `kubeconfig`):

```yaml
env:
- GOOGLE_APPLICATION_CREDENTIALS=/run/sandbox/fs/secrets/shared/gcp-account.json
- CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$GOOGLE_APPLICATION_CREDENTIALS
- KUBECONFIG=/run/sandbox/fs/secrets/shared/kubeconfig
```

To quickly validate the setup, run:

```shell
gcloud auth print-access-token
```

It's not necessary to use `gcloud auth login`, as it saves a user login credential into the home directory.
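As a quick check of GKE access — a sketch assuming the `kubeconfig` secret from the env example above (any read-only `kubectl` command works):

```shell
KUBECONFIG=/run/sandbox/fs/secrets/shared/kubeconfig kubectl get namespaces
```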
## Setup Per-Sandbox Cloud Native Resources

### Prepare provision scripts

After [Access Setup](#access-setup), a developer is able to access the cloud provider's API and use CLI tools to manually create resources. A sandbox workspace is a good place to develop the resource provisioning scripts. The sandbox lifecycle hooks use general shell commands to manage resources, so any tools can be used for this purpose. Terraform is a very popular tool for this purpose and is highly recommended. Crafting provides a simplified configuration for hooking Terraform into the sandbox lifecycle, with additional features like visualizing the state.

### Define resources in sandbox

Once the scripts are ready, define the resources in a sandbox template, like:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  handlers:
    on_create:
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/provision.sh
    on_delete:
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/unprovision.sh
```

The `resources` list defines one or more groups of resources to be managed in the sandbox lifecycle; they are provisioned/unprovisioned independently. For the full reference, please read [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition#resources).

The `handlers` specify the scripts to run on specific lifecycle events, like sandbox creation (`on_create`) and sandbox deletion (`on_delete`). Each handler runs its script using a workspace, so the scripts can be managed in source control, automatically checked out into a workspace during sandbox creation, and leverage all the tools from the workspace snapshot. The `run.dir` specifies a path relative to the home directory in the workspace as the working directory, and `run.cmd` specifies the actual command to run. This can be a single command or a multi-line shell script.

When using Terraform, the configuration can be simpler:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  terraform:
    workspace: dev
    dir: deploy/tf
    run:
      timeout: 600s
    vars:
      instance_type: 't2.micro'
```

The configuration specifies the location of the main Terraform module in the workspace, and the sandbox knows what to do with it:

* During sandbox creation, it runs `terraform init` and `terraform apply`;
* During sandbox deletion, it runs `terraform destroy`.

As the lifetime of the resources is aligned with the sandbox, the Terraform state should be saved in the same folder in the workspace, and the Crafting system will be able to visualize the state from that file.

### Share the template

Save the above configuration as a Template and test it with a new sandbox. Once everything looks good, the Template can be shared with other developers. With a single click, a sandbox brings up a full, self-contained dev environment.

## Advanced Topics

### Details about the resources

A resource is displayed as a *Card*, similar to other workloads like workspaces, dependencies, containers, etc. Clicking on the *Card* opens a detailed view. The author of the Template is able to provide customized details in this view to help developers better understand what has been provisioned, including convenient links to external URLs for accessing/managing the resources. To do that, specify `save_state: true` in the handlers and add a `details` property with a markdown template:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  details: |
    ## Provisioned Resources

    - [RDS]({{state.rdsUrl}})
  handlers:
    on_create:
      save_state: true
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/provision.sh
    on_delete:
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/unprovision.sh
```

With `save_state: true`, the STDOUT of the script is expected to be JSON and is used as the context (referenced as `state`) for rendering the markdown template in the `details` field. For example, if the output of `./scripts/provision.sh` is:

```json
{"rdsUrl":"https://rds.amazonaws.com/someid"}
```

the above template will be rendered as (substituting `{{state.rdsUrl}}`):

```markdown
## Provisioned Resources

- [RDS](https://rds.amazonaws.com/someid)
```

It's important to make sure non-JSON output is written to STDERR rather than STDOUT.
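A minimal sketch of a provisioning script following this convention (the provisioning commands are placeholders):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Progress and diagnostics go to STDERR so they don't pollute the state.
echo "provisioning dev resources..." >&2

# ... create cloud resources here (e.g. via aws/gcloud/terraform) ...

# The final line prints the JSON state consumed by the `details` template.
echo '{"rdsUrl":"https://rds.amazonaws.com/someid"}'
```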
When using Terraform, Crafting retrieves the output (`terraform output`) in JSON and uses that as the context, for example:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  details: |
    EC2 Instance: {{state.instance_id}}
  terraform:
    save_state: true
    workspace: dev
    dir: deploy/tf
    run:
      timeout: 600s
    vars:
      instance_type: 't2.micro'
```

In addition to the `details`, Crafting also shows the state of each Terraform resource.

![image](https://files.readme.io/e61075e-TerraformStateViz.png)

The saved state can also be referenced in the sandbox summary template, prefixed by the resource name. For example:

```yaml
summary: |
  This sandbox contains an EC2 instance:

  - {{resources.aws.state.instance_id}}
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  details: |
    EC2 Instance: {{state.instance_id}}
  terraform:
    save_state: true
    workspace: dev
    dir: deploy/tf
    run:
      timeout: 600s
    vars:
      instance_type: 't2.micro'
```

### Suspend and resume

The resource handlers can optionally take advantage of the sandbox suspend and resume events to further optimize the cost of cloud resources, by defining `on_suspend` and `on_resume`:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  handlers:
    on_create:
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/provision.sh
    on_delete:
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/unprovision.sh
    on_suspend:
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/suspend.sh
    on_resume:
      use_workspace:
        name: dev
        run:
          dir: src
          cmd: ./scripts/resume.sh
```

If the cloud resources are stateless, the same scripts used for `on_create` and `on_delete` can be reused for `on_suspend` and `on_resume`. Please note, if `save_state: true` is specified in `on_resume`, it will overwrite the state generated by `on_create`. Here's the Terraform example:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  terraform:
    save_state: true
    workspace: dev
    dir: deploy/tf
    run:
      timeout: 600s
    vars:
      instance_type: 't2.micro'
    on_suspend:
      vars:
        instance_count: '0'
```

If `on_suspend` is specified, `terraform apply` is run with the additional configuration (for example, the `instance_count` variable above) during sandbox suspension. `on_resume` is implicitly enabled, using `terraform apply` during sandbox resumption.

### Restrict Access to Workspaces and Secrets

In some cases, cloud resources are provisioned according to the sandbox lifecycle, while for development work the developers are not granted permissions for creating/updating/removing cloud resources. Basically, the *write* permission is only used during the lifecycle events, and developers should have *read-only* permissions when accessing the cloud resources. To achieve this, different configurations can be used. For example, use different secrets for different cloud roles/service accounts so they have different permissions. To prevent unintentional use of the wrong identity, access to the secret for the privileged cloud identity can be restricted to *Admin Only*. Those secrets are no longer mounted in regular workspaces, and they can't be referenced in environment variables either. They are only mounted when the workspace is in *Restricted* mode. When a workspace is in *Restricted* mode, only an org admin is allowed to *access* (including SSH, Web Terminal, WebIDE, Remote Desktop, etc.) the workspace.
Even the owner of the sandbox will be denied access if the owner is not an org admin. In this mode, the secrets with access restriction set to *Admin Only* are mounted in the folder for shared secrets.

The *Restricted* mode can only be enabled when a workspace is being created, and it can have one of two lifetime settings:

1. Start up: the workspace is created in *Restricted* mode, and it can exit *Restricted* mode at any time, as requested by any user who has the *Update* permission on the sandbox. Once the workspace exits *Restricted* mode, the secrets with access restriction set to *Admin Only* are unmounted, and the workspace can never get back into *Restricted* mode;
2. Always: the workspace is created in *Restricted* mode and will never exit it.

The *Restricted* mode is defined in the Sandbox Definition:

```yaml
workspaces:
- name: example
  ...
  restriction:
    life_time: STARTUP # or ALWAYS if exit is not allowed
```

After the workspace is created, the *Restricted* mode is permanent. The workspace will not accept new settings from Sandbox Definition changes.

Here are two common practices using the workspace *Restricted* mode:

1. Start the workspace in *Restricted* mode with `life_time = STARTUP`, so the setup script is able to access secrets with access restriction set to *Admin Only*. As these secrets represent a privileged cloud identity, the setup script is able to update/change cloud resources. Once done, the script runs `cs sandbox restriction disable $SANDBOX_WORKLOAD` to exit *Restricted* mode, so the developer is able to access the workspace (see the sketch after this list);
2. Define a dedicated workspace that is always in *Restricted* mode (`life_time = ALWAYS`). Usually this workspace is dedicated to running the lifecycle event handlers of resources, as it has access to the secrets with access restriction set to *Admin Only*, and it's not necessary for the developers to use this workspace.
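A minimal sketch of such a startup script (the provisioning step is a placeholder; the `cs sandbox restriction disable` call and `$SANDBOX_WORKLOAD` variable are as quoted above):

```shell
#!/usr/bin/env bash
set -e

# Runs while the workspace is still in Restricted mode, so secrets marked
# Admin Only are mounted and the privileged cloud identity is usable.
./scripts/provision.sh   # placeholder for the privileged setup work

# Hand the workspace over to developers by exiting Restricted mode.
cs sandbox restriction disable "$SANDBOX_WORKLOAD"
```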
> 🚧 Not for security purposes
>
> This feature is designed for convenience and to avoid unintentional mistakes. It's not designed to protect access to sensitive information in secrets as part of security requirements.

---

# Code with JetBrains IDEs

Source: https://docs.sandboxes.cloud/docs/code-with-jetbrains-ides.md

On this page, we cover the common topics regarding using JetBrains IDEs, such as `IntelliJ`, `RubyMine`, `PyCharm`, `GoLand`, `WebStorm`, `CLion`, etc., to work on code inside the Crafting Sandbox.

## Directly edit code in sandbox using JetBrains Gateway and Client

JetBrains offers a suite of powerful and feature-rich IDEs for common programming languages. For remote development, it offers `JetBrains Gateway` on the client side to work with an IDE backend running on the server. After the `Gateway` sets up the IDE backend, it launches an IDE frontend, `JetBrains Client`, which connects to the backend via SSH. Crafting Sandbox supports JetBrains Gateway natively with the `Crafting Plugin`. It can be installed by simply running the `cs jetbrains` command:

```shell
cs jetbrains
Downloading JetBrains Gateway ...
Downloading Crafting plugin
Unpacking Crafting plugin ...
```

It will download the latest version of JetBrains Gateway and install the Crafting Plugin into the downloaded version. After that, it will automatically connect to the workspace you selected and launch `JetBrains Client` to edit the code inside the sandbox.

This way, the IDE backend runs in the cloud workspace, taking on the heavy lifting of indexing the code, building, running tests, etc., while a generic IDE frontend (JetBrains Client) runs on your local machine, providing a native GUI editing experience.

You can select which IDE backend to launch on the workspace to match the language you want to edit, as follows; by default it uses IntelliJ:

```shell
cs jetbrains --ide=IntelliJ
cs jetbrains --ide=PyCharm
cs jetbrains --ide=RubyMine
cs jetbrains --ide=GoLand
cs jetbrains --ide=WebStorm
cs jetbrains --ide=CLion
```

Note that if the corresponding IDE backend is used for the first time on the workspace, it needs to be downloaded and installed, which takes a few minutes. We suggest pre-installing the IDE into the corresponding location on the workspace, i.e., under `~/.cache/JetBrains/RemoteDev/`, and including the directory in the `Home Snapshot` (see [here](https://docs.sandboxes.cloud/docs/workspaces-setup#home-snapshots)) or your `Personal Snapshot` (see [here](https://docs.sandboxes.cloud/docs/personalize)) so that it's loaded into any new sandbox you create.

Alternatively, you can also use a command to launch the `JetBrains Gateway` with a GUI on your local machine and use the installed Crafting Plugin:

```shell
cs jetbrains --gateway
```

This way, you can select which workspace and which IDE you want to use in the GUI. Or, in the address bar, simply paste the corresponding WebIDE link and click `Connect`. It also saves the most recently connected workspaces in a list to help you quickly reconnect to them.

## Use JetBrains IDE on local codebase in hybrid mode

If you prefer using your desktop version of the IDE (e.g. `IntelliJ`) directly instead of `JetBrains Client`, Crafting supports two ways of `hybrid development`:

* Code locally, build and run remotely with code sync: see [here](https://docs.sandboxes.cloud/docs/code-sync)
* Code and run one service locally, with context services remote via port forwarding: see [here](https://docs.sandboxes.cloud/docs/port-forwarding)

## Customize JetBrains Remote Server Version

Use the `customizations` section in the Template to specify the desired version of the JetBrains remote server, for example:

```yaml
workspaces:
- name: example
  ...
customizations:
- property_set:
    type: crafting.dev/sandbox/jetbrains
    properties:
      workspace: example
      ide_code: IU
      ide_version: 2024.1.1
      ide_folder: ideaIU-2024.1.1
```

The properties are:

* `workspace`: required, name of the workspace. The version selection is only applied to the specified workspace;
* `ide_code`: required, matches the specified IDE. It uses the JetBrains-defined codes, some of which are:
  * `IU`: IntelliJ
  * `PC`: PyCharm
  * `WS`: WebStorm
* `ide_folder`: optional, but recommended; the folder name of the remote server installation.

## Prelaunch JetBrains Remote Server

To reduce the start time of `cs jetbrains`, the remote server matching the specified version can be preinstalled in the base snapshot and prelaunched during workspace startup. Starting from Crafting `1.8.3`, the remote dev server can be launched easily using (inside a Crafting workspace):

```shell
cs jetbrains remote-dev-server run ${PROJECT_DIR}
```

For older versions, the remote server can be launched as below. (Directly using `remote-dev-server.sh` may run into race conditions that JetBrains hasn't resolved well, so the `cs jetbrains remote-dev-server ...` command applies some workarounds internally to avoid them.)

```shell
BROWSER='/opt/sandboxd/sbin/wsenv open' nohup \
  ~/.cache/JetBrains/RemoteDev/dist/ideaIU-${INTELLIJ_VERSION}/bin/remote-dev-server.sh \
  run ${PROJECT_DIR} \
  --ssh-link-host ${SANDBOX_WORKSPACE}--${SANDBOX_NAME}-${SANDBOX_ORG}${SANDBOX_SYSTEM_DNS_SUFFIX} \
  > ${LOG_FILE} &
```

Here `ideaIU-${INTELLIJ_VERSION}` matches the `ide_code`, `ide_version` and `ide_folder` in the `customizations` section.

To auto-launch the JetBrains remote server, add it as a daemon to a checkout of the workspace, for example:

```yaml
workspaces:
- name: example
  checkouts:
  - path: src
    repo:
      ...
    manifest:
      overlays:
      - inline: |
          ...
          daemons:
            remote-dev-server:
              run:
                cmd: cs jetbrains remote-dev-server run
```

Note: stopping the daemon may not stop the remote-dev-server in the background if there is an active client session connected. To forcibly stop the remote-dev-server when stopping the daemon, add the flag `--terminate-dev-server` to the command line.

## Warm-up Index

The first launch of the remote server on a code repository takes a bit longer because it indexes the source code. This can be done explicitly during workspace startup to save time when a client connects. Starting from Crafting `1.8.3`, run the following command to warm up the index explicitly:

```shell
cs jetbrains remote-dev-server warmup ${PROJECT_DIR}
```

With older versions, run the following command (note: it may run into the race conditions mentioned above about prelaunching):

```shell
~/.cache/JetBrains/RemoteDev/dist/ideaIU-${INTELLIJ_VERSION}/bin/remote-dev-server.sh warmup ${PROJECT_DIR}
```

This command can be added to the `post-checkout` hook or run before the remote server starts as a daemon.

## Troubleshooting

#### Unable to launch IDE client (macOS only)

When this happens, the UI may show the progress of downloading the IDE thin client (or not, if it's already downloaded), and after that no IDE UI is launched. In the terminal, a log line may show up like:

```text
WARN - #c.i.r.d.CodeWithMeClientDownloader - Running client process failed after specified number of attempts
```

If info-level logging was enabled, it will show something like `error=Error Domain=NSOSStatusErrorDomain Code=-10661`.

This is caused by having downloaded the `cs` CLI and/or the JetBrains Gateway for a different CPU architecture, so the Gateway downloads the IDE thin client for the wrong CPU architecture. A clean fix is:

* Remove the `cs` binary
* Run `rm -fr ~/.crafting/sandbox/cli`
* Run `rm -fr ~/.crafting/sandbox/jetbrains`
* Download and install the `cs` binary for the correct CPU architecture

Then run `cs jetbrains` again and see if the issue is resolved.

---

# Code with VS Code

Source: https://docs.sandboxes.cloud/docs/code-with-vs-code.md

On this page, we cover the common topics regarding using VS Code to work on code inside the Crafting Sandbox.

## Launch desktop VS Code and connect to sandbox

The Web IDE that comes with Crafting Sandbox is the open-source version of VS Code, which offers a near-native experience much like desktop VS Code. But if you prefer, you can use the desktop version of VS Code already installed on your local machine to code directly in the sandbox. Simply run `cs vscode` and select the workspace you want to connect to; it will launch VS Code and establish a remote coding environment via SSH.
```shell
cs vscode
```

Keep in mind that for this feature to work, the `code` command that launches VS Code needs to be in your `PATH`, which may not be the default for **macOS** users:

```shell
export PATH="$PATH:/Applications/Visual Studio Code.app/Contents/Resources/app/bin"
```

With the native IDE on your local machine, all your convenient settings and customizations are already in place, and your extensions are already installed, so you get a familiar coding environment for the best productivity.

VS Code supports remote coding by splitting the IDE functionality into a frontend and a backend. With `cs vscode`, the IDE frontend (such as the editor) runs on your local machine, while the IDE backend (such as language indexing, code analysis, etc.) runs in the sandbox, with an SSH connection in between. The embedded terminal in the IDE also runs in the remote sandbox, which is convenient for running `git` commands where the code is. Some features in VS Code require remote settings, and some extensions require remote installation; please see [below](#setup-appropriate-extensions-on-sandbox) for more information.

#### How to set up WSL to enable launching VS Code from WSL and connect to sandbox?

Windows Subsystem for Linux (WSL) is a platform that provides an Ubuntu experience on Windows 10/11, and it is getting more and more popular with developers. Visual Studio Code supports WSL seamlessly by running the IDE in Windows while using its remote development support to connect to code in the Linux subsystem.

Although the `cs` CLI works normally in WSL, it does not yet work on the native Windows platform; therefore `cs vscode` does not work out of the box in WSL. However, with minimal configuration, it can be set up properly.

The root of the problem is that **since VS Code launched from WSL runs in native Windows, it uses the default ssh client in Windows and the configurations there to connect to the sandbox, making the setup done by `cs` ineffective**. To work around it, we need to **let VS Code use the ssh config in WSL itself**. To do that, follow these steps:

1. First, run `cs ssh` in WSL to connect to the target workspace so that the proper ssh setup is created in WSL.
2. Create an `ssh.bat` executable batch file and put it somewhere in the Windows file system, e.g. `C:\Users\<username>\.ssh`, with the one-line content `C:\Windows\system32\wsl.exe ssh %*`.
3. Install the "Remote - SSH" (ms-vscode-remote.remote-ssh) plugin in VS Code if you haven't already done so.
4. Edit the settings in VS Code: "Settings" -> "Extensions" -> "Remote - SSH" -> "Remote.SSH: Path", and set it to your `ssh.bat` location, e.g. `C:\Users\<username>\.ssh\ssh.bat`. Or, if you prefer to modify the settings directly in JSON, add `"remote.SSH.path": "C:\\Users\\<username>\\.ssh\\ssh.bat"` to the JSON config.

After that, `cs vscode` in WSL will connect properly to the sandbox; remember to select `Linux` as the remote platform.

## Setup appropriate extensions on sandbox

For the best productivity, we usually need to configure the extensions properly for our IDE. Whether using the Web IDE or desktop VS Code, sometimes extensions need to be installed on the IDE backend, where the code is. This means that even if you have installed an extension locally in your desktop VS Code, you may need to reinstall it as a remote extension on the sandbox.
The VS Code extensions you install on the sandbox are located under your home directory in the workspace:

* Desktop VS Code extensions: `~/.vscode-server/`
* Web IDE extensions: `~/.vscode-remote/`

To avoid reinstalling them manually for every new sandbox, you can include these directories in the `Home Snapshot` for the team to share (see [here](https://docs.sandboxes.cloud/docs/workspaces-setup#home-snapshots)) or in your `Personal Snapshot` just for yourself (see [here](https://docs.sandboxes.cloud/docs/personalize)).

---

# Command Line Tool

Source: https://docs.sandboxes.cloud/docs/command-line-tool.md

## One CLI

The single CLI, `cs`, can be used outside the Crafting Sandbox system (e.g. on your laptop or local machine) or inside a workspace, where it is already installed. Type `cs` or `cs help` to view the list of sub-commands and flags. Use `cs COMMAND --help` (or `cs help COMMAND`) to get help for a specific command.

## Output and Interactive Mode

Based on the terminal, the CLI automatically determines whether to support colored output and enable interactive mode.

The CLI outputs all human-readable sentences, as well as error messages, on *STDERR*. It reserves *STDOUT* for structured data output, like JSON or YAML. So when piping the output, make sure the output is JSON or YAML (some commands always output JSON or YAML, e.g. `cs template show --def`). Colored output can be turned off by setting the environment variable `CLI_NO_COLOR` to a non-empty value, regardless of the actual value.

When interactive mode is enabled, some command line arguments can be omitted or incomplete; the CLI will interactively prompt the user to select or enter the required information. A text editor (used by `cs template create`, `cs template edit`, etc.) will only be launched in interactive mode. Interactive mode is disabled in any of the following cases:

* The terminal is not a TTY;
* *STDIN* is closed;
* The environment variable `CLI_SCRIPT` is not empty, regardless of the value;
* The command line flag `--output-format` (or `-o`) is specified. Note: this disables interactive mode and assumes the CLI is used by a script for piping input/output.

## Commands

## Login

Create an authenticated session for the CLI and other clients.

```shell
cs login

  [QR code rendered here for scanning from a phone]

Login with: https://sandboxes.cloud/auth/login?state=c%3Ac76vuhtq43umrqhpeumg
```

It prints the login URL (and a QR code if you want to scan and log in from a phone). Visit the URL in a browser to complete the login process.

##### Flags

* `-t`: followed by a login token to log in with it. See [Service Account and Login Token](#service-account-and-login-token) for more details.
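For example, a non-interactive login with a token (the `LOGIN_TOKEN` variable is a hypothetical placeholder for a token created under `Team -> Service Accounts`):

```shell
cs login -t "$LOGIN_TOKEN"
```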
Remember that if you are using self-hosted Crafting, first set `CRAFTING_SANDBOX_SERVER_URL=https://your.site.address`.

## Info

Display detailed information about the current client.

```shell
cs info
USER Demo Me
  Email: demo.me@crafting.dev

AUTHORIZED SSH KEYS
FINGERPRINT                                          COMMENT  CURRENT
SHA256:M0JDLwPWPnuIixDDIRbxDCwoTgRmbj1YAZqbFZQLqyI   dev      *

SECRET default-ssh-0
  Version:   16aa9d024b577309
  OwnedBy:   demo.me@crafting.dev
  UpdatedAt: 03 Nov 12:47:19
  CreatedAt: 03 Nov 12:47:19
  CreatedBy: demo.me@crafting.dev
  Type:      SSHKey
  State:     READY
SSH Authorized Key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOLsrOAKpIIP/yDQhks70RbPmmsdPFz/czxD99vnHLuybY4koRecq3N9mpC9zj67kla0bX0yjqJSaUAkeb+sPzC+2VNdvpjnUhOqEmSDwflyVDz+3Q6h+5M4gSIbr6L79KK/UrG728lp8EZwWQW1RPNvzjAs26y+yZ7oOT420FISSNT2KBvQYJSVI5X1tQOexHtfwtdcmzpCgr96lq0H0T7dQ6ZjuTggw9ScEDPtR+XYr/16KoD5Bf8TVcqbRi1ACxob2yCm8raxtJl7b/VEY2HNl1AOT42zzacVkP/IrsTYoeMjlteipQ/LAq3i5SX8vGCK4bU83GIoE1jjoMkOKA+UlfI4dUeUhQisTvRFM3rvFGDCmGECbrf/w1auD+fmRcYbZMj/+BicvAS1SkeHFgJz4yDoIYJPx48jT5pmGMVlMSolg78FP07pwg36k3yzIW5k4NFRCztsY9gbHWqFoOGaM5f1lJbjp09ul0GvYtk60WqyvwLcaNQT80RI/QVtM=
Fingerprint: SHA256:DSF5PZ+LDK7G4M3gFlp1RpTPDs02GNoPmJmZc8aqoc4
```

The `AUTHORIZED SSH KEYS` section lists the SSH authorized keys that can be used to access a workspace. The marker `*` indicates the authorized key used by the current client.

The `SECRET default-ssh-0` section provides the public key information of the SSH keypair generated and managed by the Crafting Sandbox system for the current user. This is used to check out (and push) code if the SSH protocol is specified for a git repository (like `git@...`). If that's the case, this public key must be added to the git source control system (for GitHub, follow the [doc](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account)).

## Org

List and show the organization information.

```shell
cs org list
cs org show
```

## Template

List, show and manipulate Templates.

#### Create

Create a new Template from a [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition).

```shell
cs template create NAME DEFINITION-FILE.(json|yaml)|-
```

The [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition) is read from a JSON or YAML file, or from *STDIN* (when `-` is specified for `DEFINITION-FILE`). If read from *STDIN*, the format must be explicitly specified using `--format` (or `-f`). If no arguments are specified and [interactive mode](#output-and-interactive-mode) is enabled, the CLI will ask for the name and launch an editor (from the env `EDITOR` or the `editor` command) to edit the definition.

##### Flags

* `--format`, `-f`: The format of `DEFINITION-FILE`: `json` or `yaml`. The default is to guess from the file extension if `DEFINITION-FILE` is a file.

##### Examples

```shell
cs template create dev1 def1.yaml
cs template create dev1 - --format=yaml
```

## Sandbox

#### Create

Create a new sandbox, usually from a Template.

```shell
cs sandbox create NAME
```

##### Flags

* `--template`, `-t`: The name of the Template used for creating the sandbox;
* `--auto`, `-A`: The name of a workspace to run in *AUTO* mode. Use `*` to put all workspaces in *AUTO* mode;
* `--override`, `-D`: Specify override rules: `WORKLOAD/rule=value`, see [rules](#sandbox-override-rules);
* `--auth-proxy`, `-p`: Specify the [auth proxy](https://docs.sandboxes.cloud/docs/sandbox-definition#http-endpoint) config: `ENDPOINT_NAME=[Y|N][[:A|R PATTERN]...]`:
  * `Y|N` specifies whether the auth proxy is enabled or disabled;
  * Each rule starts with `:`, `A|R` specifies `ACCEPT` or `REJECT`, and `PATTERN` matches the email.
* `--if-exists=skip`: Do nothing if a sandbox with the same name already exists, and exit without failure.
* `--wait`: Wait for the sandbox to become ready. This is the default; specify `--wait=false` to not wait.
* `--wait-timeout=DUR`: Wait until the sandbox is ready or `DUR` is reached, ignoring errors.

##### Examples

```shell
# Create a sandbox using the demo Template
cs sandbox create demo1 -t demo

# Create a sandbox for preview purposes, leaving all workspaces in AUTO mode
cs sandbox create demo1 -t demo -A '*'

# Create a sandbox checking out an alternative branch and changing the home snapshot
cs sandbox create demo1 -t demo \
  -D 'dev/checkout[src/demo].version=preview1' \
  -D 'dev/home=home/preview'

# Create a sandbox using an old version of a package
cs sandbox create demo1 -t demo -D 'dev/package[golang]=1.16.2'

# Create a sandbox with a different mysql database name
cs sandbox create demo1 -t demo -D 'mysql/property.database=previewdb'

# Create a sandbox with mysql pre-populated from a snapshot
cs sandbox create demo1 -t demo -D 'mysql/snapshot=mysql/preview/20211105'

# Create a sandbox with the auth proxy OFF
cs sandbox create demo1 -t demo --auth-proxy 'app=N'

# Create a sandbox allowing external visitors to access the endpoint
cs sandbox create demo1 -t demo --auth-proxy 'app=Y:A v*@guests.com'
```

#### Update

Update a sandbox.

```shell
cs sandbox update NAME
```

Without additional flags, the sandbox is synchronized with the changes in the current Template. It does nothing if the Template hasn't changed since the creation (or last update) of the sandbox. With flags, the update may alter certain overrides; see below.

##### Flags

* `--override`, `-D`: Specify an override: `WORKLOAD/rule=value`, see [rules](#sandbox-override-rules);
* `--mode`, `-m`: Change the workspace mode: `WORKSPACE-NAME=AUTO|MANUAL` (or `a|m`);
* `--auth-proxy`, `-p`: Specify the [auth proxy](https://docs.sandboxes.cloud/docs/sandbox-definition#http-endpoint) config, see [sandbox creation flags](#sandbox-create-flags);
* `--set-all`, `-A`: Ignore existing overrides, and create overrides and workspace modes completely from the command line;
* `--no-sync`: when only `--mode` (or `-m`) is specified, do not synchronize the changes from the Template; only change the modes.
* `--wait`: Wait for the sandbox to become ready. This is the default; specify `--wait=false` to not wait.
* `--wait-timeout=DUR`: Wait until the sandbox is ready or `DUR` is reached, ignoring errors.

##### Examples

```shell
# Sync the sandbox with its Template
cs sandbox update demo1

# Only change the workspace mode to MANUAL
cs sandbox update demo1 -m dev=MANUAL --no-sync

# Disable the auth proxy
cs sandbox update demo1 --auth-proxy app=N
```

#### List

List sandboxes.

```text
cs sandbox list
NAME   STATE  TEMPLATE  OWNER                 UPDATED_AT       CREATED_AT       CREATED_BY            #W  #EP
demo1  Ready  demo      demo.me@crafting.dev  01 Dec 04:35:36  01 Dec 04:35:36  demo.me@crafting.dev  5   2
```

The `STATE` of a sandbox may show one of the following values:

* `SettingUp`: some of the workspaces and dependencies are still being configured and set up (including code checkout and build);
* `Ready`: all workspaces and dependencies are working (code checked out and built, processes launched);
* `Problematic`: at least one of the workspaces or dependencies encountered errors, which can be:
  * Failures during setup, including code checkout and build;
  * Some processes are not running;
  * Readiness probes yield negative results.
* `Failed`: this indicates a serious problem where the sandbox can't come up. Contact support if you see this;
  Contact support if you see this;
* `Suspended`: the sandbox was suspended (either manually or automatically); use `cs sandbox resume` to resume it.

##### Flags

* `--columns`, `-c`: Select columns: `+COLUMN` makes `COLUMN` visible, and `-COLUMN` hides `COLUMN`. Example: `-c +UPDATED_AT,-CREATED_AT`;
* `--watch`: Incrementally watch sandbox updates and refresh the list. It can be used with `-o` to incrementally print changed sandboxes on `STDOUT`. Sandboxes no longer shown on the list (e.g. deleted) are printed in a schema like `{ "absent": ["id1", "id2", ...] }`.

#### Show

Show the details of a sandbox.

```shell
cs sandbox show NAME
```

#### Suspend

Suspend a sandbox manually.

```shell
cs sandbox suspend NAME
```

Suspend immediately changes the sandbox `STATE` to `Suspended`. However, it may take a while for the workspaces and dependencies to freeze.

#### Resume

Resume a suspended sandbox.

```shell
cs sandbox resume NAME
```

Some commands will automatically resume a sandbox if it's suspended. Use this command to manually resume a sandbox.

##### Flags

* `--wait`: Wait for the sandbox to become ready. This is the default; specify `--wait=false` to not wait;
* `--wait-timeout=DUR`: Wait for the sandbox until it's ready or `DUR` has elapsed, ignoring errors.

#### Edit

Edit the [Definition](https://docs.sandboxes.cloud/docs/sandbox-definition) of the sandbox. If the sandbox was created from a Template, it is detached from the Template. That means the sandbox owns its definition and will no longer be able to sync changes from that Template. The sandbox will also not be able to attach to any Template.

```shell
cs sandbox edit NAME
```

##### Flags

* `--from`: Read the new definition from the specified file (`-` for reading from STDIN), rather than launching an interactive editor;
* `--keep`: If the sandbox was created with additional configurations (workspace AUTO mode, extra environment variables), keep those when applying the new definition. By default they are discarded;
* `--wait`: Wait for the sandbox to become ready after applying the new definition;
* `--wait-timeout=DUR`: Wait for the sandbox until it's ready or `DUR` has elapsed, ignoring errors;
* `--force`: Always continue without confirmation.

#### Remove

Delete a sandbox.

```shell
cs sandbox remove NAME
```

**WARNING**: Sandbox removal is permanent. All data in workspaces and dependencies will be lost and is **UNRECOVERABLE**.

##### Flags

* `--force`, `-f`: Force remove without confirmation.

#### Sandbox Override Rules

These are the rules used by `cs sandbox create` or `cs sandbox update` to override the settings in the [Definition](https://docs.sandboxes.cloud/docs/sandbox-definition) used by the sandbox. The format is `WORKLOAD-NAME/rule=VALUE`. The rules are:

* For workspaces:
  * `checkout[PATH].PROPERTY=VALUE` (alias `co`): override a [checkout](https://docs.sandboxes.cloud/docs/sandbox-definition#checkouts) property:
    * `repo`: `VALUE` is a string, in the format of `SCHEME:URI`, where `SCHEME` can be `git` or `github`.
      A list of examples:
      * `git:git@github.com:org/repo`
      * `git:https://github.com/org/repo`
      * `github:org/repo`
    * `version_spec` (alias `version`): the value can be one of:
      * a branch name
      * a tag name
      * a commit hash
  * `package[NAME]=VERSION` (alias `pkg`): override the version of a [package](https://docs.sandboxes.cloud/docs/sandbox-definition#packages) to be used;
  * `portforward[LOCAL]=HOST:PORT` (alias `pf`): override the [local port forwarding](https://docs.sandboxes.cloud/docs/sandbox-definition#local-port-forwarding) rule;
  * `base=SNAPSHOT_NAME`: override the [base snapshot](https://docs.sandboxes.cloud/docs/sandbox-definition#snapshots);
  * `home=SNAPSHOT_NAME`: override the [home snapshot](https://docs.sandboxes.cloud/docs/sandbox-definition#snapshots);
  * `env[KEY]=VAL`: inject/override the environment variable.
* For dependencies:
  * `version=VERSION`: override the [dependency service version](https://docs.sandboxes.cloud/docs/sandbox-definition#dependencies);
  * `snapshot=SNAPSHOT_NAME`: override the [dependency service snapshot](https://docs.sandboxes.cloud/docs/sandbox-definition#dependencies);
  * `property.NAME=VALUE` (alias `prop` or `p`): override the named [dependency service property](https://docs.sandboxes.cloud/docs/sandbox-definition#dependencies).
* For containers:
  * `snapshot=SNAPSHOT_NAME`: override the [container snapshot](https://docs.sandboxes.cloud/docs/sandbox-definition#container-snapshot);
  * `env[KEY]=VAL`: inject/override the environment variable.

See the [examples](#sandbox-create-examples) above.

#### Pin/Unpin

Pin a sandbox to be always running (without being automatically suspended). This is useful when a sandbox is used for demo purposes (mostly with AUTO mode on).

```shell
cs sandbox pin NAME    # Keep the sandbox running
cs sandbox unpin NAME  # The sandbox can be auto-suspended
```

#### Access Control

Specify/view the access level of a sandbox.

```shell
cs sandbox access private  # Set to private mode
cs sandbox access shared   # Set to shared mode (the default)
cs sandbox access show     # Show the current setting
```

Please read [Sandbox Access Control](https://docs.sandboxes.cloud/docs/sandbox-access-control) for more details.

#### Workspace Restricted Mode

Request the workspace to exit the [restricted mode](https://docs.sandboxes.cloud/docs/cloud-resources-setup#restrict-access-to-workspaces-and-secrets).

```shell
cs sandbox restriction disable WORKSPACE-NAME
```

#### Lifecycle Related

Resolve lifecycle hook failures.

```shell
cs sandbox lifecycle resolve -S SANDBOX -a ACTION [TARGET[:ACTION]]...
```

Resolves all or specified lifecycle hook failures using the specified `ACTION`. If no `TARGET` is specified, all failures are resolved. If any `TARGET` is specified, only failures on those targets (workspace name, or resource name) are resolved. A `TARGET` can be suffixed with `:ACTION` to resolve using that specific action. An `ACTION` can be one of:

* `retry`: run the lifecycle hook again; this is the default;
* `skip`: ignore the failure, assume the hook has completed successfully, and move the lifecycle transition to the next state;
* `abort`: only used for a running lifecycle hook (effectively not a resolution), to abort the execution and mark the hook as failed due to abort.

## Snapshot

Snapshot related commands.

#### Create

Create a snapshot.

```shell
cs snapshot create NAME
```

This command can create a workspace base snapshot, a home snapshot, or a dependency service snapshot. See the flags below to determine which kind of snapshot is created.
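For instance, a few hedged invocations (the sandbox and workload names reuse the `demo1` examples above, and the snapshot names follow the `TYPE-NAME-REV` convention suggested below):

```shell
# Base snapshot of workspace "dev" in sandbox "demo1"
cs snapshot create base-backend-r1 -W demo1/dev

# Home snapshot of the same workspace
cs snapshot create home-frontend-20221010 -W demo1/dev --home

# Dependency snapshot of the "mysql" dependency
cs snapshot create mysql-test-2 -W demo1/mysql
```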
When using this command, the target must be a workspace, or a dependency which supports snapshots.

**NOTE**: during dependency snapshot creation, the dependency service is stopped temporarily and resumed after the snapshot is created. For base/home snapshots, the workspace remains accessible; however, avoid writing files to the file system during that procedure, otherwise some incomplete files may be included in the snapshot.

All snapshots share the same namespace regardless of type, so it's recommended to define a naming convention to avoid conflicts. One proposal is a format like `TYPE-NAME-REV`, where `TYPE` is the snapshot type (e.g. `base`, `home`, `mysql`, etc.), `NAME` indicates the purpose of the snapshot, and `REV` reflects the revision, which can be a date like `YYYYMMDD` or a monotonic version number. For example (not necessary to follow):

* Base snapshots are named as `base-NAME-REV`, like `base-backend-r1`;
* Home snapshots are named as `home-NAME-REV`, like `home-frontend-20221010`;
* Dependency snapshots are named as `SERVICE-TYPE-NAME-REV`, like `mysql-test-2`.

Additional prefixes can be added to further separate among sub-teams or persons:

* Base snapshots used by a team: `team1-base-frontend-3`
* Personal home snapshot: `alan-home-frontend-1`

##### Flags

* `--workload`, `-W`: Specify the workload name in the format of `SANDBOX/WORKLOAD`. If the target is a dependency, a dependency snapshot is created. Otherwise, the `--home` flag determines whether it's a home snapshot or a base snapshot;
* `--home`: Create a home snapshot. The target must be a workspace;
* `--personal`: Create a [Personal Snapshot](https://docs.sandboxes.cloud/docs/personalize#create-a-personal-snapshot);
* `--set-personal-default`: Only valid with `--personal`, to set the current snapshot as the *Default Personal Snapshot*;
* `--force`, `-f`: Overwrite an existing snapshot (if `NAME` already exists) without confirmation.

#### List

List snapshots.

```shell
cs snapshot list
```

##### Flags

* `--columns`, `-c`: Select columns: `+COLUMN` makes `COLUMN` visible, and `-COLUMN` hides `COLUMN`. Example: `-c +UPDATED_AT,-CREATED_AT`.

#### Show

Show the details of a snapshot.

```shell
cs snapshot show NAME
```

#### Restore

Restore a dependency snapshot.

```shell
cs snapshot restore NAME
```

Only a dependency supporting snapshots can be restored from a snapshot. Base and home snapshots for workspaces are only applied at workspace creation time and can't be changed later.

**NOTE**: during snapshot restoring, the dependency service is stopped temporarily and resumed after the snapshot is restored.

##### Flags

* `--workload`, `-W`: Specify the workload name in the format of `SANDBOX/WORKLOAD`. The target must be a dependency which supports snapshots.

#### Remove

Remove a snapshot.

```shell
cs snapshot remove NAME
```

**WARNING**: Snapshot removal is permanent. The data in the snapshot is **UNRECOVERABLE**. Workspaces and dependencies with the snapshot already applied won't be affected. However, new sandboxes may fail to be created if the App/Sandbox references a *deleted* snapshot.

##### Flags

* `--force`, `-f`: Force remove without confirmation.

#### Personal

Personal snapshot related.

```shell
cs snapshot personal get-default       # Get the current default personal snapshot
cs snapshot personal set-default NAME  # Set the specified personal snapshot as default
cs snapshot personal set-default NONE  # Do not use a personal snapshot
```
## Secret

Secret related commands. A *Secret* operated by this command is a small piece (a few KB) of opaque data provided by the user. It does not have to be sensitive information, and the data is encrypted in storage.

A *Secret* has a scope, one of:

* Personal: belongs to a user, regardless of orgs;
* Private in org: belongs to a member in an org. The user can only access their own secrets in the context of that org, and the secrets can't be accessed by others;
* Shared in org: belongs to an org (not a user), and all members in the org have access to that secret.

#### Create

Create a secret.

```shell
cs secret create NAME
```

A secret is created with the *private in org* scope by default, unless the `--shared` flag is specified. *Personal* secrets can't be created from the CLI.

##### Flags

* `--shared`: Create a shared secret in the current org;
* `--from`, `-f`: Read content from a `FILE` or `-` (*STDIN*). This flag is required;
* `--restricted`: Set the access restriction to *Admin Only*, so this secret can only be mounted in workspaces running in [Restricted mode](https://docs.sandboxes.cloud/docs/cloud-resources-setup#restrict-access-to-workspaces-and-secrets).

#### List

List secrets.

```shell
cs secret list
```

This command lists all the secrets the user has access to, including:

* Personal secrets;
* Private secrets in the current org;
* Shared secrets in the current org.

##### Flags

* `--user`, `-u`: List personal secrets rather than org-scoped secrets;
* `--columns`, `-c`: Select columns: `+COLUMN` makes `COLUMN` visible, and `-COLUMN` hides `COLUMN`. Example: `-c +UPDATED_AT,-CREATED_AT`.

#### Show

Show the details of a secret, without revealing the content.

```shell
cs secret show NAME
```

#### Remove

Remove a secret.

```shell
cs secret remove NAME
```

By default a secret in the current org is removed, unless the `--user` flag is specified. Some secrets (e.g. generated and managed by the system) can't be removed.

##### Flags

* `--user`, `-u`: Remove a personal secret rather than one in the current org;
* `--force`, `-f`: Force remove without confirmation.

#### Access Restriction

Update the access restriction.

```shell
cs secret restrict NAME -a MODE
```

Where `MODE` can be one of:

* `default`: a regular secret, shared in the org; any member is able to access it;
* `admin-only`: access is restricted to admins only, so the secret is only mounted in workspaces running in [Restricted mode](https://docs.sandboxes.cloud/docs/cloud-resources-setup#restrict-access-to-workspaces-and-secrets).

## Dependency Service

Retrieve information about dependency services.

```shell
cs dependency-service list
cs dependency-service show NAME
```

When creating a [Definition](https://docs.sandboxes.cloud/docs/sandbox-definition), it's important to inspect the details of a dependency service using `cs dependency-service show` to figure out:

* Exposed ports of the service (name, port number and protocol);
* Properties;
* Available versions.

## Tool Packages

List available tool packages.

```shell
cs package list
```

## Mode

This is a shortcut for setting the workspace mode: `AUTO` or `MANUAL`.

```shell
cs mode auto
cs mode manual
```

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will target the current workspace (if the CLI runs inside a workspace), or prompt for a selection.

## SSH

Start an SSH session to a workspace.
```shell
cs ssh
```

When passing flags to `ssh`, put them after `--`. For example:

```shell
cs ssh -- /myscript --script-flag
cs ssh -- -t -L 8080:localhost:8080 /myapp
```

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will prompt for a selection.

## SCP

Run `scp` to copy files to/from a workspace.

```shell
cs scp LOCAL-PATH SANDBOX/WORKSPACE:REMOTE-PATH
cs scp SANDBOX/WORKSPACE:REMOTE-PATH LOCAL-PATH
```

Similar to the `ssh` command, flags passed to `scp` should be placed after `--`, for example:

```shell
cs scp -- -r LOCAL-PATH SANDBOX/WORKSPACE:REMOTE-PATH
```

## RSYNC

Run `rsync` between a local folder and a folder in a workspace.

```shell
cs rsync LOCAL-PATH SANDBOX/WORKSPACE:REMOTE-PATH
cs rsync SANDBOX/WORKSPACE:REMOTE-PATH LOCAL-PATH
```

Flags passed to `rsync` should be placed after `--`.

## SSHFS

Mount a path in a workspace to a local directory using `sshfs`, which must be installed on the system where the CLI runs.

```shell
cs sshfs SANDBOX/WORKSPACE:REMOTE-PATH LOCAL-PATH
```

## Mutagen

Run `mutagen` for a two-way sync session between a local directory and one in a workspace. [Mutagen](https://github.com/mutagen-io/mutagen) must be installed on the system where the CLI runs.

```shell
cs mutagen LOCAL-PATH SANDBOX/WORKSPACE:REMOTE-PATH
```

This command runs in the foreground until the sync session is over. Stopping the command (using Ctrl-C) will also stop the sync session.

## IDE

Launch the Web IDE in a browser.

```shell
cs ide [PATH|.|~]
```

Launches the Web IDE in a browser and opens a [checkout](https://docs.sandboxes.cloud/docs/sandbox-definition#checkouts) or the home directory if `.` or `~` is specified as the argument.

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will prompt for a selection.

## VSCode

Launch a locally installed VSCode to connect to a workspace using the SSH remote development extension.

```shell
cs vscode [PATH|.|~]
```

Launches a locally installed VSCode and opens a [checkout](https://docs.sandboxes.cloud/docs/sandbox-definition#checkouts) or the home directory if `.` or `~` is specified as the argument.

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will prompt for a selection.

## JetBrains IDE

Currently, remote development is supported using JetBrains Gateway:

```shell
cs jetbrains
```

This command will automatically download JetBrains Gateway, install the Crafting plugin, and launch the IDE connected to a remote workspace.

##### Flags

* `--ide=TYPE`: Select an IDE type. Default is `IntelliJ`; other options are `GoLand`, `RubyMine`, `PyCharm`, `CLion` and `WebStorm`;
* `--gateway`: Launch the JetBrains Gateway UI; do not connect automatically.

## Daemon Management

Manage [daemons](https://docs.sandboxes.cloud/docs/repo-manifest#daemons) inside a workspace. A daemon process must be defined in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest).

```shell
cs ps
cs up [DAEMON-NAME...]
cs down [DAEMON-NAME...]
cs restart [DAEMON-NAME...]
```

When running inside a workspace, without additional flags (`--workspace` or `-W`), the CLI targets the current workspace.

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will prompt for a selection.

## Job Management

Manage [jobs](https://docs.sandboxes.cloud/docs/repo-manifest#jobs) inside a workspace.

```shell
cs job enable [JOB-NAME...]
cs job disable [JOB-NAME...]
```

When running inside a workspace, without additional flags (`--workspace` or `-W`), the CLI targets the current workspace.

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will prompt for a selection.

## Log

View the tail logs of daemons, setup actions, etc.

```shell
cs log NAME
```

The CLI will try to match the best target based on `NAME` for fetching logs. It may prompt for a selection if multiple matches are available. With flags, the scope can be further narrowed.

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will target the current workspace if running inside one, or prompt for a selection;
* `--action`, `-a`: Match `NAME` against the action names in a task (specified by `--task`);
* `--task`, `-t`: Specify the task name; only used when `--action` is in use;
* `--kind`, `-k`: Match a specific process type (exclusive from `--action`), one of:
  * `daemon` or `d`: for daemons;
  * `job` or `j`: for jobs.
* `--path`, `-p`: Specify the [checkout](https://docs.sandboxes.cloud/docs/sandbox-definition#checkouts) path for matching the process;
* `--lines`, `-n`: Number of lines to print from the tail of the log; the default is the same as the `tail` command;
* `--follow`, `-f`: Watch and follow new logs.

##### Examples

```shell
# Show the only daemon log, or select one
cs log

# Show the build log during setup
cs log -a build

# Follow a daemon log
cs log -f server

# Tail more lines and follow
cs log -n 1000 -f server
```

## Port Forward

Bi-directional port-forwarding between local and a workspace.

```shell
cs port-forward
```

By default, this command establishes port forwarding between the local machine (where the CLI runs) and a workspace (usually specified by the `--workspace` flag):

* All ports defined in the workspace are forwarded from the workspace to the local machine (`localhost`) with the same destination port numbers;
* All rules defined in [port\_forward\_rules](https://docs.sandboxes.cloud/docs/sandbox-definition#local-port-forwarding) are forwarded from the local machine to the workspace.

The CLI tries to listen on the *local* ports as specified by `port_forward_rules`; however, it may fail if a port is already in use, in which case the CLI skips that rule and keeps the others running. The default behavior can be overridden by flags.

##### Flags

* `--workspace`, `-W`: Specify the workspace in the format of `SANDBOX/WORKSPACE`. If unspecified, it will prompt for a selection;
* `--skip-exposed-ports`, `-P`: Skip all *exposed* ports on the workspace;
* `--skip-forward-rules`, `-F`: Skip all rules in `port_forward_rules`;
* `--reverse`, `-R`: Specify an explicit incoming forwarding rule, in the format of `REMOTE-PORT:LOCAL-HOST:LOCAL-PORT`, where:
  * `REMOTE-PORT`: a port number on the workspace; it is not necessarily one of the exposed ports;
  * `LOCAL-HOST`: the hostname that traffic will be forwarded to; it can be `localhost` or any hostname reachable from the local machine;
  * `LOCAL-PORT`: a port on the host specified by `LOCAL-HOST` that a connection will be forwarded to.
* `--local`, `-L`: Specify an explicit outgoing forwarding rule, in one of the formats:
  * `LOCAL-PORT:REMOTE-PORT`: forward `localhost:LOCAL-PORT` to `REMOTE-PORT` (a port number) on the workspace;
  * `LOCAL-PORT:HOST:PORT`: forward `localhost:LOCAL-PORT` to remote, based on `HOST`:
    * `HOST` is `localhost`: `PORT` can be either a number or the name of an exposed port, and the forward target is the workspace;
    * `HOST` is not `localhost`: then it must be a workload name, and `PORT` must match an exposed port of that workload, either by port number or by name. The forward target is the specified workload and port.
  * `LOCAL-ADDR:LOCAL-PORT:HOST:PORT`: same as the above rule, except that the local listening address is `LOCAL-ADDR:LOCAL-PORT` instead of `localhost:LOCAL-PORT`.

##### Examples

```shell
# Incoming forward only
cs port-forward -F

# Outgoing forward only
cs port-forward -P

# Add an incoming forwarding rule
cs port-forward -R 8080:localhost:8080

# Specify all rules explicitly (-FP disables the defaults):
# -R 8080:localhost:8080   incoming workspace port 8080 to localhost:8080
# -L 9000:9000             outgoing from localhost:9000 to workspace 9000
# -L 5000:backend:api      outgoing from localhost:5000 to workload "backend" port "api"
# -L 5001:localhost:5001   outgoing from localhost:5001 to workspace localhost:5001
# -L :5002:localhost:5001  outgoing from *:5002 to workspace localhost:5001
cs port-forward -FP \
  -R 8080:localhost:8080 \
  -L 9000:9000 \
  -L 5000:backend:api \
  -L 5001:localhost:5001 \
  -L :5002:localhost:5001
```

## Exec

Run a command inside a [container workload](https://docs.sandboxes.cloud/docs/containers).

```shell
cs exec -- command...
cs exec -W SANDBOX/WORKLOAD -- command...
cs exec --tty -- command...  # Force using a TTY
cs exec -T -- command...     # Force disabling the TTY
```

##### Flags

* `--tty`, `-t`: Force using a TTY;
* `--disable-tty`, `-T`: Disable the TTY;
* `--uid`, `-U`: Run as the specified UID.

## Wait

Wait for a sandbox or a workload to become ready.

```shell
cs wait sandbox NAME     # Wait until the sandbox state becomes Ready or Problematic/Failed
cs wait service WORKLOAD # Wait for the readiness of a workload
```

The command `cs wait service` is useful for synchronizing the initialization between multiple workloads. For example, a workspace may need to seed some data into a database during the build process (e.g. automatically triggered during setup) while the dependency (e.g. mysql) is still starting and not ready yet. In this case, the build hook script of the workspace can include the `cs wait service` command, like:

```shell
#!/bin/bash
do_build
cs wait service mysql
do_seed_data
```

##### Flags

* `--timeout`: Maximum duration to wait. If unspecified (or zero), it waits indefinitely. The value is suffixed by a unit of `h` (hour), `m` (minute), `s` (second) or `ms` (millisecond). For example: `1h`, `5m`, `300ms`, or `6m30s`, etc;
* `--sandbox`, `-S`: Only applies to the `cs wait service` command. When used, the CLI may run outside of a workspace, or wait for a workload in a different sandbox. The value is the sandbox name.

## Docker

Run the `docker` command with the credential helper hooked up. This is used when pushing an image to the org-scoped private container registry.

```shell
cs docker -- push cr.sandboxes.cloud/myorg/path/myimage:tag
```

## Inside Workspace Only

The following commands are available only when the CLI is running inside a workspace.

#### Run Hook

Run a hook script defined in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest).

```shell
cs run-hook NAME
```

The `NAME` can be `post-checkout` or `build`.
The hook script is run directly in the foreground by the CLI, not by the workspace agent. This is for debugging purposes, as there may be slight differences between the CLI environment and the workspace agent.

#### Build

This is a shortcut for `cs run-hook build`.

```shell
cs build
```

#### Banner Control

By default, a welcome banner is displayed when an interactive shell is opened (via SSH or a VSCode terminal). This can be suppressed with the `mute`/`unmute` commands.

```shell
cs banner mute    # Do not display the banner
cs banner unmute  # Display the banner
```

## Endpoint Alias

#### Create

```shell
cs endpoint-alias create ENDPOINT-ALIAS-NAME [SANDBOX-NAME ENDPOINT-NAME]
```

Creates an *Endpoint Alias* with the name specified by `ENDPOINT-ALIAS-NAME`. The final DNS name is derived from that name and the org name. For example, `cs endpoint-alias create foo` in org `bar` will generate the DNS name `foo-bar.sandboxes.run`. When `SANDBOX-NAME` and `ENDPOINT-NAME` are specified, the newly created *Endpoint Alias* is assigned to that endpoint; otherwise it's *Unassigned*.

#### List

```shell
cs endpoint-alias list
```

Shows a list of all Endpoint Aliases.

#### Assign

```shell
cs endpoint-alias map ENDPOINT-ALIAS-NAME [SANDBOX-NAME ENDPOINT-NAME]
```

The assignment can be changed at any time. If `SANDBOX-NAME` and `ENDPOINT-NAME` are unspecified, the Endpoint Alias becomes *Unassigned*.

#### Remove

```shell
cs endpoint-alias remove ENDPOINT-ALIAS-NAME
```

## Service Account and Login Token

```shell
cs org service-account create NAME --display-name "DISPLAY NAME" --role ROLE
cs org service-account remove NAME
cs org login-token create ACCOUNT_EMAIL --valid-since TIME --expiry TIME \
  --redirect-path PATH --url
cs org login-token remove PARTIAL_TOKEN
cs org login-token list
cs org login-token show PARTIAL_TOKEN
```

When creating/removing a service account, only `NAME` is provided, and the account email is generated as `NAME@org.sandbox`. `DISPLAY NAME` is optional. Additional `ROLE`s can be specified with one or more `--role` flags. The currently available role is `org-admin`.

When creating a Login Token, the full `ACCOUNT_EMAIL` must be provided, e.g. `NAME@org.sandbox`. The flags `--valid-since` and `--expiry` are highly recommended. The value can be in one of the following formats:

* `+DUR`: now plus a duration, e.g. `+30m`, `+4h`, `+1h20m`, etc;
* `-DUR`: now minus a duration, e.g. `-30m`, `-4h`, `-1h20m`;
* `@TIME`: at a specific time, e.g. `@12:10`, `@2022-06-07 21:30:00`.

`--redirect-path` can be specified to redirect to the given path after login on the Web Console. When `--url` is specified, the full login URL is printed instead of the token itself.

For the *remove* and *show* commands of a Login Token, a sub-string of the token can be provided as `PARTIAL_TOKEN` and the command will match the token.
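For example, a hypothetical invocation that creates a login token for a service account `ci-bot@org.sandbox` (an illustrative name), makes it valid for the next four hours, and prints the full login URL:

```shell
cs org login-token create ci-bot@org.sandbox --valid-since=-5m --expiry=+4h --url
```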
A *Login Token* can be shared with a non-member of the organization to login from:

* Web Console: open the full login URL (printed when `--url` is specified);
* CLI: `cs login -t TOKEN`

Remember that if you are using self-hosted Crafting, first set `CRAFTING_SANDBOX_SERVER_URL=https://your.site.address`.

For CLI use, a more secure practice is to put the token in a file and use the following environment variable to point to the file, for example:

```shell
export CRAFTING_SANDBOX_AUTH_TOKEN_FILE=/somefolder/token
cs login
```

Or:

```shell
export CRAFTING_SANDBOX_AUTH_TOKEN=token
cs login
```

## External Infrastructure and Kubernetes

For details about Kubernetes support, please read [Setup for Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-setup).

#### Connect a Kubernetes Cluster

```shell
cs infra connect kubernetes [NAME]
```

Installs the Crafting Kubernetes Agent into the current cluster and registers the cluster in the Crafting system under `NAME`, which is only used on the Crafting system side for referencing the cluster and doesn't have to be the exact cluster name.

##### Flags

* `--subnets`: Comma-separated CIDRs representing the subnets accessible in the cluster; the agent will tunnel through the sandbox to access these subnets when interception is on. If unspecified, the command will try to detect the in-cluster Pod subnet and Service subnet; if detection fails, it will prompt for entering the CIDRs manually. Specifically for AWS EKS clusters, as they use VPC subnets directly, the command won't be able to detect the Service subnet; in this case, simply provide the VPC CIDR;
* `--apiserver-proxy-clusterrole`: The cluster role that the API server proxy runs under. This is also the identity used by sandboxes to access the API server. For most development usage, the default value is `cluster-admin`;
* `--disable-apiserver-proxy`: Disable the API server proxy completely. Sandboxes won't be able to access the API server through the agent. Additional setup is required (see [Setup for Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-setup)) if API server access is still desired.

#### List Connected Clusters

```shell
cs infra list
```

#### Disconnect a Cluster

```shell
cs infra disconnect [NAME]
```

This command uninstalls the Crafting Kubernetes Agent and unregisters the cluster from the Crafting system. Before the operation, the command performs some checks and aborts if there's any error. The flag `--ignore-check-errors` can be used to continue the operation even if there are errors; however, agent uninstallation is disabled if there's any error, and as a result the cluster is unregistered with the agent still running inside. With `--force-uninstall`, the uninstallation is attempted after the cluster is unregistered. To manually uninstall the agent, simply delete the namespace `crafting-sandbox`.

##### Flags

* `--ignore-check-errors`: Continue unregistering the cluster even if there were check errors;
* `--force-uninstall`: Always attempt uninstalling the agent even if there were check errors.

## CLI Extensions

A *CLI extension* is an executable with a file name like `cs-FOO`, so the command `cs FOO` invokes the executable `cs-FOO` with the rest of the command-line arguments. An extension can be placed in any folder that can be looked up via the `PATH` environment variable, or a git repository containing such files can be installed.
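For illustration, a minimal hypothetical extension: a script named `cs-hello` placed on `PATH`, so that `cs hello` invokes it with the remaining arguments:

```shell
#!/bin/bash
# cs-hello: invoked as `cs hello ARGS...`; the CLI passes the rest of the
# command line through as the script's arguments
echo "hello from a cs extension, args: $*"
```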
```shell
cs extensions install git@github.com:example/cs-ext
cs extensions install https://github.com/example/cs-ext
cs extensions install /absolute-path-to-a-local-folder
cs extensions list
cs extensions uninstall [PARTIAL-NAME]
```

#### Install

Only two kinds of sources can be installed:

* A git repository
* A local folder

The same git repository with different versions (branch/tag) is treated as different sources. When looking up an extension command, the installed git repository is updated automatically based on the default or specified version.

##### Flags

* `--version`: Only used when a git repository is installed. Default is the `master` branch;
* `--subdir`: Specify a sub-directory inside the installed repository or folder for extension executables. By default, only the top-level directory is searched. Extension executables are only looked up in one level of directory, not recursively.

#### Uninstall

When uninstalling a git repository, the original repository URL and version must be matched. From the command line, partial content of the original URL can be provided, and the CLI will help match the installed sources. If there are multiple matches, the user is asked to select one of them.

---

# Concepts and Architecture

In this section, we explain some of the core concepts behind Crafting Sandbox and dig deeper into its architecture.

## Core Concepts

### Sandbox

A `sandbox` is a self-contained development environment. It includes one or multiple `workspaces`, and also `containers`, `dependencies`, `resources`, and `endpoints` as needed. These components are grouped together in a sandbox and share the same life-cycle, e.g., creation, suspension, deletion, etc. Each sandbox has its own private network to connect the components in it.

### Workspace

A `workspace` is a dev container installed with a Linux OS and dev tools, functionally equivalent to a developer VM on the cloud. It's the main component that developers interact with, because it's where the source code is checked out and where the service being developed runs.

### Dependency & Container

`Dependencies` and `containers` act as supporting roles in the sandbox. In addition to the service they actively develop, developers usually need other services, which they don't modify, to be available as well. These include standard services like a Postgres database or a Redis cache, or other specific services running in their own standard containers.

A `dependency` is a well-known standard service like a database, cache, etc. Crafting supports a list of commonly used ones, like `MySQL`, `Postgres`, `DynamoDB`, `Redis`, `ElasticSearch`, `Memcache`, `RabbitMQ`, etc., with multiple versions each. Users can just specify a name and version to make them available conveniently. In addition, users can bring any custom `container` to the sandbox by specifying the container tag/image, environment variables, etc., for maximum flexibility.

### Resource

In addition to `dependencies` and `containers`, developers sometimes need to access or provision additional cloud-native components for their development, such as an AWS Lambda function, a Kubernetes namespace, etc. These components are physically outside of the sandbox, but they need to work closely with the components in the sandbox, and their life-cycle should be in sync with the sandbox. A `resource` represents an external component that works alongside the components in the sandbox.
Crafting allows a `resource` to be provisioned by Terraform or user-defined custom scripts. The life-cycle of the `resource` is kept in sync with the sandbox so that the provisioned external components can be properly created and destroyed.

### Endpoint

For security, the ports exposed by `workspaces`, `dependencies`, and `containers` are only visible to the internal network of the `sandbox`. An `endpoint` creates an external-facing URL serving as the ingress to the sandbox. Traffic to an `endpoint` can be routed to ports on `workspaces`, `dependencies`, and `containers` in the sandbox. Advanced routing rules and authentication can be added to `endpoints`.

### Template

To make development environments standard and replicable, the specific configuration of a `sandbox` can be saved as a `template`. A `template` defines the specs of components within a sandbox, such as `workspaces`, `endpoints`, etc., and users can create sandboxes from any templates they define. Since a `template` is often used to represent the entire app end-to-end, it's also called an `app`.

### Other concepts

Other concepts like `Secret`, `Snapshot`, `Repo Manifest`, etc. are discussed in detail in other parts of the documentation.

## How does Crafting Fit in Your Workflow

As shown above, Crafting is made to assist developers as the dev environment. It does not replace your current source control system or CI/CD pipeline; it is typically used *before* the code change (Pull Request) is merged into the main branch.

Crafting is also language/technology agnostic. Operating at the operating-system and network layer, it does not restrict itself to any particular programming language, framework, or technology. Whether you are a frontend engineer working on JavaScript or a backend engineer working with Java/Go/Python, etc., as long as you can set up your environment on a Linux machine, you can take advantage of Crafting.

It can support developers doing full-on-the-cloud development, where developers use a Web IDE or desktop IDE to directly modify the source code in the online workspace. Or it can be used together with the local machine in `hybrid mode`, providing an end-to-end context on the cloud for the service being worked on locally. In addition, developers can create a sandbox per Pull Request to do an end-to-end preview of their changes in a production-like environment.

### Full-on-the-Cloud Development

For full-on-cloud development, developers don't need to bother setting up and maintaining dev environments on their local machine. All they need to install is a browser or a remote-capable IDE, such as VS Code or JetBrains IDEs. They can use them to connect to a Crafting Sandbox and edit code remotely in a fully prepared dev environment with a Linux OS and all the dev tools.

Advantages of full-on-the-cloud development include:

* The dev environments are standardized and centrally managed, always stable and ready to code.
* Leverage powerful cloud machines without being limited by local CPU and memory.
* Library & toolchain updates and security patches are always fresh; no local maintenance.
* Code anywhere, any time, completely portable.
* Linux OS, architecture consistent with production machines.
* Easy remote collaboration and trouble-shooting.

Crafting supports the following ways to code directly on cloud workspaces:

* SSH into the cloud workspace and use a terminal-based editor, such as Vim, Emacs, Nano, etc.
* VS Code Web IDE
* VS Code Desktop connecting to remote via SSH
* JetBrains IDEs, such as IntelliJ, RubyMine, PyCharm, GoLand, etc.
### Cloud-and-Local-Hybrid Development

For hybrid development, developers still maintain their local dev environment and use their favorite local IDE to work on a code base checked out on their local machine. They can run the service they focus on locally as well. But to actually test their changes, they can have all the heavy-lifting services their service depends on run on the cloud. That way, they don't need to bother setting up locally all the services they don't touch, and the resource usage on their local machine is minimized. Through traffic forwarding, their local machine virtually replaces the corresponding service on the cloud to form an end-to-end flow.

Advantages of hybrid development include:

* Developers have a near-zero workflow change from their local dev experience.
* Heavy dependencies and services are offloaded to the cloud and no longer consume local resources.
* Developers can choose to use port forwarding to route traffic between local and cloud for an end-to-end experience.
* Developers can also choose to use code/file sync with remote workspaces to build/run services on cloud workspaces.

### Production-like Preview

Crafting is often used for end-to-end preview in the development workflow. After a code change (e.g., a Pull Request) is submitted for review, developers often need to verify how the change behaves in an end-to-end, production-like environment before merging it. Crafting lets developers create a whole production-like environment on demand, easily and in a resource-efficient manner, so that developers, as well as product managers, designers, and QA, can preview the change.

Advantages of production-like preview include:

* Developers don't need to fight for a shared staging environment to preview their change end-to-end.
* Changes can be reviewed by cross-functional teams early, minimizing iteration cycles.
* Easy creation and automatic clean-up make production-like environments manageable.
* Leveraging existing production config achieves high fidelity to production.

Crafting offers general support for replicating your production environment. Anything you can provision with Terraform or with your own scripts, you can replicate on Crafting:

* CPU architecture, operating systems, and networks can match your production.
* Custom containers can be pulled from your registry.
* Special support for running Kubernetes-based apps and services.
* Serverless cloud-native resources such as AWS Lambda, SQS, etc. can be allocated on demand.

## Crafting Architecture

Here, we dig a little deeper into Crafting's internal architecture. As shown above, Crafting runs in a Kubernetes cluster. It has a control plane with management services such as the API service, the reconciler, etc., and a database to store metadata. It manages the nodes (VM hosts) in the machine pool and runs the sandboxes' workloads, e.g., workspaces, dependencies, etc., as containers on the nodes. Workloads in one sandbox can physically run across multiple nodes, making it very scalable and not limited by per-machine resource constraints.

An overlay network is set up by Crafting to connect workloads within a sandbox and ensure isolation between different sandboxes. The overlay network is also responsible for more advanced functionality such as on-demand traffic routing. Inside a workload, Crafting uses a different setup for each workload type.
For example, a dependency starts from a container image, with possibly a data snapshot applied on top, and it has process management and logging to support the running service and finally expose a port.

For a workspace, which developers use as a cloud machine for interactive development, there are more components. It starts from a Linux OS image (Ubuntu by default, substitutable by the user), with possibly a file-system snapshot for the user's customization, at the organization level or per developer. On top of that is the automation layer that manages the machine's environment, including dev packages, open ports, environment variables, secrets, port-forwardings, checkout management, the build system, and process management. The top layer is the source code the user checks out and the processes running from that source code, as well as the IDE backend to support remote coding.

The developer then uses their local machine to access the workspace, via an SSH terminal, the Web IDE running in a browser, or a desktop IDE connecting to the IDE backend. The developer also manages the sandbox system from the web console, which is backed by the Crafting control plane.

### Working with your own Kubernetes cluster

Crafting for Kubernetes allows you to connect existing Kubernetes clusters (not the cluster dedicated to the Crafting installation) to the Crafting platform for preview, traffic interception, etc. Please see [Develop on Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-dev) for details.

### Working with cloud native serverless resources

Crafting allows developers to access cloud-native serverless resources on their cloud providers, like AWS and GCP, directly from the workspaces. The main issue is establishing proper authentication and authorization for such access. Crafting provides solutions like identity federation and stored secrets for users to achieve that. Please see [Develop with cloud resources](https://docs.sandboxes.cloud/docs/cloud-resources-dev) for details.

---

# Setup containers and dependencies

This page talks about how to set up built-in dependencies, such as `Postgres`, `Redis`, `ElasticSearch`, etc., and custom containers to support your services running in `workspaces`, specifically:

* [Configure a dependency](#configure-a-dependency)
* [Configure a container](#configure-a-container)
* [Container setup options](#container-setup-options)
* [Volumes & data snapshots](#volumes--data-snapshots)
* [Volume sharing](#volume-sharing)
* [Debug a container](#debug-a-container)
* [Build your own image](#build-your-own-image)

To add a dependency or container, from the editing view of a [Standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox), we can click `Add Component`, as highlighted below, and select `Dependencies` or `Containers` in the dialog, respectively.

## Configure a dependency

To configure the newly added dependency, we can click into its detailed view. As shown above, we can edit the following info:

* **Name of the dependency** (will be used as the hostname)
* **Type and version of the dependency** Commonly used services and versions are supported here; for a detailed list of what's supported, please see [https://sandboxes.cloud/dependencies](https://sandboxes.cloud/dependencies)
* **Property values** Many containers support a list of properties that users can customize to the values they want.
  For example, for `postgres`, you can customize a pre-registered user with `username` and `password`, and a pre-created database with `database`.
* **Default snapshot** We can optionally specify a snapshot of the state we want to apply to this dependency in all newly created sandboxes. See [Save and load data snapshots](https://docs.sandboxes.cloud/docs/data-snapshots) for how to create such a snapshot.

### Configure a container

Sometimes, in addition to the commonly used built-in dependencies, we need custom containers to run alongside our services in the same sandbox. The Crafting system supports running a workload directly from a container image in a public container registry (or private registries for Crafting Self-hosted), giving developers more flexibility in case the current dependency services are not sufficient, such as:

* Needing a supporting service that Crafting doesn't have as a built-in dependency.
* Needing a specific version of a supporting service that Crafting doesn't have in the built-in list.
* Needing certain customization, e.g. a special config file in the supporting service.

In addition, introducing additional containers is a great way to add dev tools that make the dev environment more convenient, e.g. adding SQLPad as a UI for querying the database in a sandbox. Next, we will use SQLPad as an example to walk through how to configure containers.

#### Container setup options

The dependency `mysql` helps run my service; it would be great to have a simple UI to access the database. We can use the [sqlpad container](https://hub.docker.com/r/sqlpad/sqlpad/) to provide an experience that is ready to use when a new sandbox is created.

As shown above, we can edit many options for the container we want to add, from the image of the container to ports, ENVs, and overrides for the entrypoint, arguments, working directory, etc. The schema follows most of the docker container image configuration. The properties `entrypoint`, `args`, and `cwd` correspond to [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint), [CMD](https://docs.docker.com/engine/reference/builder/#cmd) and [WORKDIR](https://docs.docker.com/engine/reference/builder/#workdir) in a Dockerfile. For the full definition, please check out the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition#containers).

Note that it's important to declare the exposed ports in the containers. They are NOT inferred from the container image.
The example YAML for the SQLPad config is shown below:

```yaml
# My workspaces
workspaces:
- name: myapp
  checkouts:
  - path: src
    repo: git@github.com:examples/myapp
  ports:
  - name: http
    port: 8000
    protocol: HTTP/TCP

# myapp needs mysql
dependencies:
- name: mysql
  service_type: mysql
  properties:
    database: myapp
    username: myapp

# Add sqlpad for the team
containers:
- name: sqlpad
  image: sqlpad/sqlpad:latest
  env:
  - SQLPAD_AUTH_DISABLED=true
  - SQLPAD_AUTH_DISABLED_DEFAULT_ROLE=admin
  - SQLPAD_CONNECTIONS__myapp__name=myapp
  - SQLPAD_CONNECTIONS__myapp__driver=mysql2
  - SQLPAD_CONNECTIONS__myapp__host=mysql
  - SQLPAD_CONNECTIONS__myapp__database=myapp
  - SQLPAD_CONNECTIONS__myapp__username=myapp
  - SQLPAD_DEFAULT_CONNECTION_ID=myapp
  ports:
  - name: web
    port: 3000
    protocol: HTTP/TCP

# Exposed endpoints
endpoints:
- name: app
  http:
    routes:
    - path_prefix: /
      backend:
        target: myapp
        port: http
- name: sqlpad
  http:
    routes:
    - path_prefix: /
      backend:
        target: sqlpad
        port: web
```

With the above Sandbox Definition, *sqlpad* can be accessed using a URL like [https://sqlpad--sandbox-myorg.sandboxes.cloud](https://sqlpad--sandbox-myorg.sandboxes.cloud). By default, it's authenticated.

#### Volumes & data snapshots

A container in Crafting handles volumes differently from a container run in docker or Kubernetes. In a sandbox, the filesystem of a container workload is always persisted: restarting a container preserves all files. Because of this, it's not necessary to explicitly specify volume mounts to persist data folders, as people usually do with docker or docker compose. A volume is only needed when it's shared by multiple containers (read and write), or when special content (config, secret, etc.) should be mounted into the container.

To add a volume, click `Add Component` in the editing view (shown above) and choose `Volumes`.

Like dependencies, we can take data snapshots for containers as well. Note that a snapshot for a container requires a pre-defined volume and only includes the data in that volume. Please make sure the data files for the service running in the container are on the defined volume. Please refer to the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition#volumes) for the details of the types of volumes and how to define them.

#### Volume sharing

Specifically, when a [regular volume](https://docs.sandboxes.cloud/docs/app-definition#regular-volume) is shared by more than one container, the volume is mounted for reading and writing. This usage may imply co-location of these containers on the same host, and thus scalability may be impacted. Here's an example of two [filebrowser containers](https://hub.docker.com/r/filebrowser/filebrowser) sharing the same volume:

```yaml
containers:
- name: files1
  image: filebrowser/filebrowser:latest
  ports:
  - name: web
    port: 80
    protocol: HTTP/TCP
  volume_mounts:
  - name: files
    path: /srv
- name: files2
  image: filebrowser/filebrowser:latest
  ports:
  - name: web
    port: 80
    protocol: HTTP/TCP
  volume_mounts:
  - name: files
    path: /srv

volumes:
- name: files

endpoints:
- name: files1
  http:
    routes:
    - path_prefix: /
      backend:
        target: files1
        port: web
- name: files2
  http:
    routes:
    - path_prefix: /
      backend:
        target: files2
        port: web
```

#### Debug a container

The logs of a container workload can be viewed the same way as for a workspace or a dependency. Additionally, `cs exec` (see the [reference](https://docs.sandboxes.cloud/docs/command-line-tool#exec) for details) can be used to run a command inside the container.
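For instance, a hypothetical invocation that opens a shell in the `sqlpad` container from the definition above (the sandbox name `demo1` is illustrative):

```shell
cs exec -W demo1/sqlpad -- sh
```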
However, the executable must exist on the filesystem inside the container. Port-forwarding is supported the same way as for a workspace; use `cs port-forward`.

#### Build your own image

For your convenience, the Crafting system provides a private container registry for each organization using Crafting SaaS. It can be accessed with the prefix `cr.sandboxes.cloud/ORG/`, and the docker wrapper command `cs docker` is provided for pushing images. Here are the push steps:

* Build your image using a Dockerfile, with the `docker build ...` command;
* Tag the image for the sandbox private registry: `docker tag LOCAL-IMAGE cr.sandboxes.cloud/ORG/NAME:TAG`;
* Push the image using `cs docker -- push cr.sandboxes.cloud/ORG/NAME:TAG`.

Here's an example, assuming the organization is `myorg`:

```shell
docker build -t cr.sandboxes.cloud/myorg/shared/myservice:latest src/myservice/docker
cs docker -- push cr.sandboxes.cloud/myorg/shared/myservice:latest
```

And use that image in a container workload:

```yaml
containers:
- name: myservice
  image: cr.sandboxes.cloud/myorg/shared/myservice:latest
```

---

# Copy files between local and cloud

During development, you often need to copy input files from your local machine to the Crafting workspaces, or copy result files back. In this page, we describe how to copy files between local and cloud.

It can simply be done via the `cs scp` command from the Crafting CLI, which is a simple wrapper around the `scp` command (the path placeholders follow the CLI reference):

```shell
$ cs scp LOCAL-PATH SANDBOX/WORKSPACE:REMOTE-PATH  # copy a file from local to cloud
$ cs scp SANDBOX/WORKSPACE:REMOTE-PATH LOCAL-PATH  # copy a file from cloud to local
```

Details of `cs scp` can be found [here](https://docs.sandboxes.cloud/docs/command-line-tool#scp).

To sync directories between the local machine and a Crafting workspace, other tool integrations like `cs rsync` and `cs mutagen` are available; please see [Code sync for hybrid development](https://docs.sandboxes.cloud/docs/code-sync) for their usage. The reference can be found [here](https://docs.sandboxes.cloud/docs/command-line-tool).

---

# Save and load data snapshots

In this page, we describe how to use data snapshots to help your development. A `Data Snapshot` (a.k.a. `Dependency Snapshot` or `Container Snapshot`) captures the filesystem state of a dependency or a custom container. Note that for built-in dependencies, such as `postgres`, `redis`, etc., the Crafting platform automatically saves the data; but for custom containers, a snapshot can only save the defined `Volume` mounted on the container. Typically, for a stateful service released as a container, there is clear documentation on where the persistent data should be stored.

In the remainder of this page, we will cover:

* [How to save data as a data snapshot](#how-to-save-data-as-a-data-snapshot)
* [How to load a data snapshot](#how-to-load-a-data-snapshot)
* [Admin guide: setup default data snapshot in template](#admin-guide-setup-default-data-snapshot-in-template)

## How to save data as a data snapshot

To take a data snapshot for a dependency or container, we can do it directly on the web console, as shown below. After clicking the save snapshot button highlighted above, we can input the name of the snapshot and click `Confirm` to save it. Saving the snapshot temporarily brings the service offline; the service restarts once the snapshot is successfully taken. The snapshot will then show up under the `Resource -> Snapshots` page in the menu.
We can also take a snapshot using the CLI command `cs snapshot create`, as follows:

```shell
$ cs snapshot create SNAPSHOT-NAME -W SANDBOX/[DEPENDENCY|CONTAINER]
```

## How to load a data snapshot

To load a data snapshot into an existing sandbox, it can be done via the web console as follows. After clicking the restore snapshot button highlighted above, we can select the snapshot to load and click `Confirm` to load the snapshot into the sandbox.

![](https://files.readme.io/0c79e7c-image.png)

Note that the existing data for the target service in the sandbox will be overwritten by the snapshot, and during the restore the service will be brought down for some time. Also keep in mind that the data format used by one version of a database is sometimes incompatible with another version; using an incompatible data snapshot may prevent the service from launching properly.

We can also restore a snapshot using the CLI command `cs snapshot restore`, as follows:

```shell
$ cs snapshot restore SNAPSHOT-NAME -W SANDBOX/[DEPENDENCY|CONTAINER]
```

To load a snapshot in a new sandbox, we can simply select the snapshot in the drop-down on the customization page. This way, we can select a snapshot other than the default one specified in the Template.

### Admin guide: setup default data snapshot in template

A default data snapshot can be set for a dependency or container to preload newly created sandboxes with standard test data by default. After a snapshot is taken from a sandbox, we can modify the Template to use that snapshot by default. It can be done from the web console, or in the YAML file of the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition):

```yaml
dependencies:
- name: mysql
  service_type: mysql
  snapshot: snapshot-mysql
```

---

# Developer Workflows

In this section, we walk through the common workflows on Crafting from a developer's point of view, assuming the proper setup is already done. For more information on sign-on and setup, please take a look at the [Quickstart Guide](https://docs.sandboxes.cloud/docs/quick-start) and the [Admin Overview](https://docs.sandboxes.cloud/docs/admin-overview), respectively.
## Demo Videos

Below are links to demo videos walking through the common workflows:

[One Click Dev Environments](https://bit.ly/crafting-democ1) (6:17)

* Creating a dev environment (Crafting Sandbox) with frontend and backend up and running
* Developing code in the Web IDE as well as a desktop IDE connecting to the remote container
* Running in local/remote hybrid mode with port forwarding

[Integration Testing & Preview](https://bit.ly/crafting-democ2) (6:01)

* Launching a preview from a PR and running everything end-to-end
* Debugging in a Crafting Sandbox with code modifications instantly effective
* Swapping a service with the local version and debugging with breakpoints in a desktop IDE

[Remote Collaboration](https://bit.ly/crafting-democ3) (4:51)

* Pair programming: see your teammates editing code live in a Crafting Sandbox
* Working with the QA team to get an environment that can reproduce test failures
* Working with third-party callbacks from the Internet and with external collaborators

[Crafting for Kubernetes](https://bit.ly/crafting-democ4) (7:17)

* Launching per-developer, on-demand, production-like Kubernetes deployments with Crafting Sandbox and managing their lifecycles
* Interactively writing code and seeing results immediately, without re-launching containers, via traffic interception
* Easy setup for your own Kubernetes cluster using your existing config

[Crafting with Cloud Resources](https://bit.ly/crafting-democ5) (5:24)

* Making cloud-native serverless services like RDS, SQS and Lambda work alongside containers
* Setting up identity federation to give developers in a Crafting Sandbox seamless and secure access to your cloud
* Provisioning cloud resources on demand and managing their lifecycle with Crafting Sandbox

## Main Development Flow

Here we work through a typical workflow for using Crafting in day-to-day development at a high level. For any specific use case, please see the corresponding section for more details; the list of top use cases can be found [here](https://docs.sandboxes.cloud/docs/quick-start#list-of-use-cases). For example, if you are using Kubernetes to orchestrate the services in your app, take a look at [Kubernetes Development and Testing](https://docs.sandboxes.cloud/docs/use-case-kubernetes).

To start working on a code change, you can create a dev environment on Crafting. It is up to you to treat the dev environment as *ephemeral* and discard it whenever you are done with the code change, or to stick with one environment and use it long-term. All your local changes in the environment are persisted.

Dev environments on Crafting are organized as `Sandboxes`, which can be constructed on demand according to your predefined template. A sandbox can have one or multiple `Workspaces`, which are dev containers with a standard Ubuntu image and dev tools (you can use your own favorite container image as well). Each workspace acts like an online VM where you code against your codebase. Workspaces in the same sandbox are connected by a virtual network, along with other components such as `containers` and `dependencies`, so that services running in different components can work with each other.

To create a sandbox, you can go to our web console and enter the create page. The following shows the components diagram for our demo and the sandbox creation page, with several workspaces, containers, and dependencies. To launch the sandbox, just click the **LAUNCH** button in the top right corner and a sandbox will be created.
During sandbox creation, the Crafting platform *prepares your sandbox to be fully ready for you to develop on*. Based on your setup, it checks out the source code into corresponding workspace folders, installs all the dev packages, builds your code with powerful cloud machines, sets up your database with migrations and seed data, and even runs the services end-to-end for you. The following shows a launched sandbox.

From here, you can open the Web IDE directly for any workspace to start coding; it also includes a terminal for accessing the workspace via the command line. The Web IDE is based on open-source VS Code, with a coding experience as good as native VS Code. Given the source code in the sandbox is managed by git as a checkout, you can commit your change and push it back to the repo from your workspace directly.

You can also access the workspace via SSH from your local machine via our command line tool, `cs`. If you prefer to use your desktop version of VS Code or JetBrains IDEs, such as IntelliJ, RubyMine, PyCharm, etc., you can also run these locally and connect to the sandbox via SSH to directly modify the remote codebase. See more details at [Code with VS Code](https://docs.sandboxes.cloud/docs/code-with-vs-code) and [Code with JetBrains IDEs](https://docs.sandboxes.cloud/docs/code-with-jetbrains-ides)

---

# Docker in Workspace

The workspace has an integrated docker CLI and daemon, and will automatically start the docker daemon the first time it's used. For isolation and security, the docker daemon is only allowed to run in rootless mode.

## Use My Own Docker Installation

If the version of the docker daemon provided by the workspace is not the desired one, you can install the full docker suite yourself. Here's an example of installing your own version (take `24.0.6` as an example):

```shell
curl -sSfL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.6.tgz | sudo tar -C /usr/local -xz
sudo mv /usr/local/docker/docker /usr/local/bin/
```

After the workspace restarts (e.g. suspend/resume), the next time docker is used, it will be the version you installed.

### Install buildx

The command `docker buildx` is provided by the [buildx docker CLI plugin](https://github.com/docker/buildx). Installing it on a Crafting sandbox is straightforward (please change the version in the URL accordingly):

```shell
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo wget -O /usr/local/lib/docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.11.2/buildx-v0.11.2.linux-amd64
sudo chmod a+rx /usr/local/lib/docker/cli-plugins/docker-buildx
```

Now you can use `docker buildx`

### Install buildkit

[buildkit](https://github.com/moby/buildkit) provides extended capabilities for building container images, including multi-arch images, etc. The installation on a Crafting sandbox is straightforward (please change the version in the URL accordingly):

```shell
curl -sSfL https://github.com/moby/buildkit/releases/download/v0.12.2/buildkit-v0.12.2.linux-amd64.tar.gz | sudo tar -C /usr/local -zx
```

Then run it as a daemon by adding the file `/etc/sandbox.d/daemons/buildkit.yaml`

```yaml
name: buildkit
run:
  cmd: |
    mkdir -p /run/buildkit
    chown -R owner:owner /run/buildkit
    buildkitd --rootless --group owner
```

Now you can use `buildctl`.
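As a quick check, a minimal `buildctl` invocation to build the Dockerfile in the current directory might look like the following sketch (the image name `myapp:dev` is illustrative):

```shell
# Build the Dockerfile in the current directory with buildkit and keep
# the result in buildkit's cache as an image named myapp:dev.
buildctl build \
  --frontend dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=image,name=myapp:dev
```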
The `buildkit.yaml` can also be embedded in the Template (see details in [Workspace System](https://docs.sandboxes.cloud/docs/sandbox-definition#workspace-system)), for example:

```yaml
workspaces:
- name: example
  system:
    daemons:
    - name: buildkit
      run:
        cmd: |
          mkdir -p /run/buildkit
          chown -R owner:owner /run/buildkit
          buildkitd --rootless --group owner
```

#### Example: buildkit as docker builder

The buildkit socket can be registered as a docker remote builder. Update the daemon as:

```yaml
name: buildkit
run:
  cmd: |
    docker buildx inspect buildkit >/dev/null 2>&1 || docker buildx create --name buildkit --platform linux/amd64,linux/arm64 --driver remote unix:///run/buildkit/buildkitd.sock
    mkdir -p /run/buildkit
    chown -R owner:owner /run/buildkit
    buildkitd --rootless --group owner
```

To build, use `docker buildx build --builder=buildkit ...`

### Pull Image from Private ECR

After setting up AWS access (please refer to [AWS Setup](https://docs.sandboxes.cloud/docs/cloud-resources-setup#aws-guide)), use the [AWS ECR credential helper](https://github.com/awslabs/amazon-ecr-credential-helper) to enable private ECR access without storing credentials. If not already installed, install using:

```shell
sudo curl -o /usr/local/bin/docker-credential-ecr-login -sSfL https://amazon-ecr-credential-helper-releases.s3.us-east-2.amazonaws.com/0.7.1/linux-amd64/docker-credential-ecr-login
sudo chmod a+rx /usr/local/bin/docker-credential-ecr-login
```

And add the following to the file `~/.docker/config.json` (replace `<AWS_ACCOUNT_ID>` and `<REGION>` with your own values):

```json
{
  "credHelpers": {
    "<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com": "ecr-login"
  }
}
```

Then try with `docker pull <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/...`
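For instance, with a hypothetical account ID and region, a pull would look like:

```shell
# Illustrative values only — substitute your own account ID, region, and repo.
docker pull 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
```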
It can also be created using the CLI:

```shell
$ cs endpoint-alias create demo sandbox-foo app
```

Please check out the [CLI Document](https://docs.sandboxes.cloud/docs/command-line-tool#endpoint-alias) for more details.

### Common questions regarding endpoint alias

#### What happens if a sandbox is deleted?

The *Endpoint Alias* becomes *Unassigned*. Click the *Assign* button on the card to assign a different endpoint.

#### What happens if an endpoint is deleted or renamed?

Same as with sandbox deletion, the *Endpoint Alias* becomes *Unassigned*.

#### Does Endpoint Alias Support Authentication?

An *Endpoint Alias* is a name only; all features come from the actual endpoint. If the target endpoint is authenticated, then the *Endpoint Alias* supports authentication.

---

# Environment variables (ENV)

This page describes how to use environment variables in a Crafting sandbox for your development needs. The outline is as follows:

* [Types of environment variable definitions in sandbox](#types-of-environment-variable-definitions-in-sandbox)
* [Built-in environment variables](#built-in-environment-variables)
* [Sandbox-level environment variables](#sandbox-level-environment-variables)
* [Workspace-level environment variables](#workspace-level-environment-variables)
* [Environment variables for Repo Manifest](#environment-variables-for-repo-manifest)
* [User-defined environment for interactive shells](#user-defined-environment-for-interactive-shells)
* [Use Secret in Environment Variables](#use-secret-in-environment-variables)
* [How do environment variables take effect](#how-do-environment-variables-take-effect)
* [Merge of environment variables](#merge-of-environment-variables)
* [Override environment variables at sandbox creation](#override-environment-variables-at-sandbox-creation)
* [When changes are applied to the sandbox](#when-changes-are-applied-to-the-sandbox)
* [Admin guide for environment variables](#admin-guide-for-environment-variables)
* [Use environment variables for service linking](#use-environment-variables-for-service-linking)
* [Use Secret to store sensitive information previously stored in ENV](#use-secret-to-store-sensitive-information-previously-stored-in-env)
* [How to use direnv](#how-to-use-direnv)
* [How to use dotenv package for Node.js](#how-to-use-dotenv-package-for-nodejs)

## Types of environment variable definitions in sandbox

Crafting platform supports multiple tiers of environment variable injection/customization in workspaces:

* Built-in environment variables: injected by default for all processes in the workspace;
* Sandbox-level environment variables;
* Workspace-level environment variables;
* User-defined environment for hooks, daemons, and jobs in each [checkout](https://docs.sandboxes.cloud/docs/sandbox-definition#checkouts);
* User-defined environment for interactive shells.

#### Built-in environment variables

The following environment variables are injected into workspaces by default:
| Variable | Description | Value Example |
| :--- | :--- | :--- |
| `SANDBOX_SYSTEM_URL` | The base URL to access the Crafting system. | `https://sandboxes.cloud` |
| `SANDBOX_SYSTEM_DOMAIN` | The domain part of `SANDBOX_SYSTEM_URL`. | `sandboxes.cloud` |
| `SANDBOX_SYSTEM_DNS_SUFFIX` | The suffix for constructing DNS names after `SANDBOX_SYSTEM_DOMAIN`. | `.sandboxes.cloud` |
| `SANDBOX_ORG` | The name of the current organization. | `crafting` |
| `SANDBOX_ORG_ID` | The ID of the current organization. | |
| `SANDBOX_NAME` | The name of the current Sandbox. | `mysandbox` |
| `SANDBOX_ID` | The ID of the current Sandbox. | |
| `SANDBOX_APP` | The name of the Template that the Sandbox is created from. It's only available when the Sandbox is created from a Template. | `crafting-backend-dev` |
| `SANDBOX_WORKSPACE` | The name of the current workspace. | `api` |
| `SANDBOX_OWNER_ID` | The ID of the Sandbox owner, if available. | |
| `SANDBOX_OWNER_EMAIL` | The email of the Sandbox owner, if available. | `demo@crafting.dev` |
| `SANDBOX_OWNER_NAME` | The display name of the Sandbox owner, if available. | |
| `SANDBOX_APP_DOMAIN` | The Internet-facing DNS domain of the Sandbox. Often, it has the format `${SANDBOX_NAME}-${SANDBOX_ORG}.sandboxes.run` | `mysandbox-org.sandboxes.run` |
| `SANDBOX_ENDPOINT_DNS_SUFFIX` | The suffix for Internet-facing DNS names of endpoints. The complete DNS name of an endpoint can be constructed using `${ENDPOINT_NAME}${SANDBOX_ENDPOINT_DNS_SUFFIX}` | `--mysandbox-org.sandboxes.run` |
| `SANDBOX_JOB_ID` | The job ID, if the sandbox is created for a job | |
| `SANDBOX_JOB_EXEC_ID` | The job execution ID, if the sandbox is created for a job | |
| `SANDBOX_POOL_ID` | The sandbox Pool ID, if the sandbox is currently in a pool | |
| `<NAME>_SERVICE_HOST`\ `<NAME>_SERVICE_PORT`\ `<NAME>_SERVICE_PORT_<PORT>` | Service linking environment variables. See [Use environment variables for service linking](#use-environment-variables-for-service-linking) below. | `MYSQL_SERVICE_HOST=mysql`\ `MYSQL_SERVICE_PORT=3306`\ `MYSQL_SERVICE_PORT_MYSQL=3306` |
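For example, code in a workspace can combine these variables to construct sandbox-specific URLs; a minimal shell sketch (the endpoint name `app` is illustrative):

```shell
#!/usr/bin/env bash
# Construct the public URL of an endpoint named "app" for this sandbox.
APP_URL="https://app${SANDBOX_ENDPOINT_DNS_SUFFIX}"
echo "Sandbox ${SANDBOX_NAME} (org ${SANDBOX_ORG}) exposes: ${APP_URL}"
```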
The built-in environment variables can be used by code running in the sandbox to select configuration specific to the sandbox environment.

#### Sandbox-level environment variables

These environment variables are defined in the Template as part of the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition) and apply to all the workspaces in the sandbox, affecting shells, IDEs, hooks, and the daemons/jobs defined in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest). They are defined in the top-level `env` section in the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition):

```yaml
# These environment variables apply to all workspaces.
env:
- DEV_ENV=development
- APP_URL=https://app${SANDBOX_ENDPOINT_DNS_SUFFIX} # Expansion is supported
```

#### Workspace-level environment variables

Environment variables can be defined in the `env` section of a workspace in the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition), which only applies to that workspace. For example:

```yaml
# These environment variables apply to all workspaces.
env:
- DEV_ENV=development
- APP_URL=https://app${SANDBOX_ENDPOINT_DNS_SUFFIX} # Expansion is supported
workspaces:
- name: frontend
  # These environment variables apply to this workspace only
  env:
  - EXTERNAL_URL=${APP_URL}
```

#### Environment variables for Repo Manifest

The [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest) defines hook scripts, daemons, and jobs per [checkout](https://docs.sandboxes.cloud/docs/sandbox-definition#checkouts). In the manifest, environment variables can be defined to be shared by all hook scripts, daemons, and jobs, or for individual commands/scripts. Environment variable expansion is supported in both cases. Here's an example:

```yaml
# Environment variables shared by all hooks, daemons and jobs.
env:
- EXTERNAL_ENDPOINT_NAME=app
- EXTERNAL_URL=https://${EXTERNAL_ENDPOINT_NAME}${SANDBOX_ENDPOINT_DNS_SUFFIX}
hooks:
  build:
    cmd: |
      ./scripts/build.sh
      ./scripts/seed-db.sh
    env:
    - 'DB_SERVER_ADDR=${DB_SERVICE_HOST}:${DB_SERVICE_PORT}'
    - 'APP_URL=${EXTERNAL_URL}'
daemons:
  server:
    run:
      cmd: './scripts/server.sh --app-url=${EXTERNAL_URL}'
jobs:
  post:
    run:
      cmd: './scripts/post.sh $EXTERNAL_URL'
    schedule: "*/10 * * * *"
```

The `env` section at the top defines the environment variables shared by all the commands defined in the manifest. See [Shared Environment](https://docs.sandboxes.cloud/docs/repo-manifest#shared-environment) for more details. `hooks.build.env` defines the environment variables used by the `build` hook only. See [Run Schema](https://docs.sandboxes.cloud/docs/repo-manifest#run-schema) for more details.

**Note:** The environment variables defined in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest) are *ONLY* effective for the commands defined in the manifest, i.e., they are not present in interactive shells such as SSH sessions or Web IDE sessions. **For that reason, we recommend using [Workspace-level environment variables](#workspace-level-environment-variables) in most cases if possible**.

#### Environments in Shell Scripts

> 🚧 Environments Undefined in Shell Scripts
>
> A common problem: an environment variable is well-defined when using SSH to access the workspace, but the same variable is undefined in post-checkout and build scripts, as well as in daemon scripts. Most likely, this is caused by the default `.bashrc` file in the base snapshot, which is built from commonly used Linux distributions (like Ubuntu).
The file contains the following at the beginning:

```shell
# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return ;;
esac
```

When using SSH, bash runs in *interactive* mode (unless given special flags), and thus the whole `.bashrc` file is loaded as expected. However, most automation/background scripts (like post-checkout, build hooks, daemons, etc.) are run by bash in *non-interactive* mode; as a result, the content of the `.bashrc` file is skipped by the few lines shown above. As the installation procedures of many tools append environment variables (like `PATH`) to `.bashrc`, those variables are *NOT* effective in background scripts, but work normally in SSH sessions.

**Suggestions**

* Explicitly define important environment variables in the Template or Sandbox Definition, at sandbox-level, workspace-level, or in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest);
* Craft your own `.bashrc` file in the base snapshot (`/etc/skel/.bashrc` or `/etc/skel.sandbox/.bashrc`) to make it consistent between *interactive* and *non-interactive* modes.

### Use Secret in Environment Variables

The content of a *shared* secret can be extracted into the value of an environment variable, with leading and trailing whitespace trimmed. For example:

```yaml
env:
- MY_APP_KEY=key:${secret:app-key}
```

The form `${secret:SECRET-NAME}` can be used to extract the content of the secret into the value. Only organizational secrets can be referenced.

### How do environment variables take effect

#### Merge of environment variables

Environment variables are defined in different places for different scopes, and they are merged to generate the final set of environment variables in the following order:

* Built-in environment variables
* Sandbox-level environment variables
* Workspace-level environment variables
* For hooks, daemons, and jobs in the Repo Manifest only:
  * Env defined in the top-level `env` section of the repo manifest
  * Env defined per hook/daemon/job

Expansion is evaluated immediately when an environment variable is appended during the merge. Given the following example of a Sandbox Definition:

```yaml
# These environment variables apply to all workspaces.
env:
- DEV_ENV=development
- APP_URL=https://app${SANDBOX_ENDPOINT_DNS_SUFFIX} # Expansion is supported
workspaces:
- name: frontend
  # These environment variables apply to this workspace only
  env:
  - EXTERNAL_URL=${APP_URL}
  - APP_URL=https://test
```

The final environment variables in a shell of the `frontend` workspace contain (built-in environment variables not listed here):

```shell
DEV_ENV=development
EXTERNAL_URL=https://app--mysandbox-org.sandboxes.run
APP_URL=https://test
```

When `EXTERNAL_URL` is appended, expansion is evaluated immediately, and at that time, `APP_URL` is `https://app--mysandbox-org.sandboxes.run`. The last `APP_URL=https://test` overrides the existing `APP_URL`.

#### Override environment variables at sandbox creation

At sandbox creation time, the creator can further adjust the environment variable settings for the sandbox. As shown above, the creator can add more ENV definitions at sandbox level and workspace level. New ENV definitions are appended at the bottom of the existing definitions from the template. The new ENV definitions can expand from the existing definitions, and can re-define ENVs already in the existing definitions.

#### When changes are applied to the sandbox

The environment variables defined above are effective once the workspace is created in a sandbox.
However, there may be further changes to the sandbox after creation (e.g. synchronized from a changed Template). Moreover, such changes may cause differences in environment variables (e.g. adding workspaces/dependencies affects service linking environment variables; adding/removing packages affects `PATH`). The new values won't be propagated to existing processes, including Web IDE servers and VS Code remote servers if they are running.\
New processes started after the change will pick up the new values.

Changes to the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest) are only effective the next time a command is executed. Running daemons stay with the old environment. Use `cs restart` to restart daemons so they pick up the new environment.

### Admin guide for environment variables

The following best practices are suggestions for team admins to manage environment variables in their team's development environments. They are advanced topics, some of which require further setup in the Template. See [Setup Templates for Dev Environments](https://docs.sandboxes.cloud/docs/templates-setup)

> 🚧 Do not quote values in YAML
>
> When defining environments in the sandbox definition, repo manifest, etc., do not put quotes around the value; otherwise the quotes become part of the value.

#### Use environment variables for service linking

*Service linking* (aka *service injection*) is one of the standard service discovery mechanisms. It works by injecting environment variables into the container where a service runs, so the service can discover and communicate with other services. The environment variable names are constructed using the following rules:

* `<NAME>_SERVICE_HOST` specifies the address or hostname of the service.
* `<NAME>_SERVICE_PORT` specifies the port number of the default port of the service (the first exposed port according to the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition)).
* `<NAME>_SERVICE_PORT_<PORT>` specifies the port number of each exposed port.
* Dashes `-` in `<NAME>` and `<PORT>` are converted to underscores `_`.

Take the below example [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition):

```yaml
workspaces:
- name: frontend
  ports:
  - name: http
    port: 3000
    protocol: HTTP/TCP
- name: backend
  ports:
  - name: api
    port: 8080
    protocol: HTTP/TCP
  - name: metrics
    port: 9090
    protocol: HTTP/TCP
dependencies:
- name: db
  service_type: mysql
- name: redis
  service_type: redis
```

It will inject the following environment variables in each workspace (both `frontend` and `backend`):

* `FRONTEND_SERVICE_HOST=frontend`
* `FRONTEND_SERVICE_PORT=3000`
* `FRONTEND_SERVICE_PORT_HTTP=3000`
* `BACKEND_SERVICE_HOST=backend`
* `BACKEND_SERVICE_PORT=8080`
* `BACKEND_SERVICE_PORT_API=8080`
* `BACKEND_SERVICE_PORT_METRICS=9090`
* `DB_SERVICE_HOST=db`
* `DB_SERVICE_PORT=3306`
* `DB_SERVICE_PORT_MYSQL=3306`
* `REDIS_SERVICE_HOST=redis`
* `REDIS_SERVICE_PORT=6379`
* `REDIS_SERVICE_PORT_REDIS=6379`

#### Use Secret to store sensitive information previously stored in ENV

To conveniently provide environment overrides with sensitive information, place a simple shell script that assigns environment variables into a secret. For example, to create a secret `db-env` that contains a script defining database access credentials (a minimal sketch with illustrative values; see [Secrets](https://docs.sandboxes.cloud/docs/secrets) for how to create and mount secrets):

```shell
# Write a small script assigning DB credentials (illustrative values),
# then store its content as a secret named `db-env` via the web console
# or the `cs secret` CLI.
cat > db-env.sh <<'EOF'
export DB_USER=dev
export DB_PASSWORD=dev-password
EOF
```

The secret can then be sourced where needed, e.g. `source /run/sandbox/fs/secrets/shared/db-env` for a shared organizational secret.

---

# Frequently Asked Questions

## Who can access my sandbox/workspace? Can I access my teammate's sandbox/workspace?

A sandbox/workspace is accessible to all team members in the same organization. We optimize for convenience and promote transparency within the team.
Accessing a teammate's sandbox is very similar to accessing your own sandbox, i.e., a user can:

* Use the Web Console: find the sandbox on the [Sandboxes page](https://sandboxes.cloud/sandboxes) and access it
* Use the command line tool: the `cs sandbox` related commands work for all sandboxes in the organization

## Who can access the endpoints of sandboxes?

By default, the exposed endpoints of sandboxes are protected by the same login, so users from an organization need to log in the same way to access the endpoints. However, recognizing that many API endpoints support their own authentication mechanisms, we allow users to mark endpoints as `Public` and rely only on the app's own login mechanism, just like in a production environment. Making endpoints `Public` is also helpful for demoing the app to external people.

## Will a sandbox be automatically recycled? Will my code / data be lost in a sandbox?

A sandbox is designed to support permanent usage. To save resources, it can be `suspended`. Even when suspended, its configuration and disk volume are kept so that it can be `resumed` for operation. So as long as a modification is saved to the sandbox's mounted disk volume, it will not be lost. The same goes for the persisted data in dependency services such as MySQL.

However, a user can delete a sandbox when it's no longer useful. In that case, all the data local to the sandbox will be destroyed. We recommend backing up the work often to a git repo hosted by vendors.

## Should I use one or multiple sandboxes?

Our recommendation is to use one sandbox for one purpose to make the best use of it. You should not feel the need to limit yourself to only one sandbox and keep multiple contexts in it, although using one main sandbox for most active development needs can provide more convenience in customizing it to your preference.

---

# Git Access

Accessing git repos is a necessary step for developers to use Crafting as their dev environment. This page talks about how to set up git repo access from a Crafting sandbox.

## General Git Access via SSH Public Keys

Crafting sets up a secure key pair for each user on its platform. It uses the private key in this key pair to authenticate and check out code securely. You can add the public key in this key pair to your git repo to allow Crafting to access it on your behalf.

Each user can go to the menu item `Connect -> Git` on the web console (or [here](https://sandboxes.cloud/git) for Crafting SaaS) to see the public key.

![Git menu](https://files.readme.io/ac871d7-image.png)

Click one of the buttons highlighted above to copy the public key, then go to the git repo host site (e.g. GitHub) and paste it there. This method is the most generic one and also supports private git repos, but **it requires each user to set up the access as they onboard to Crafting**

## GitHub App Integration

Crafting also supports a more convenient GitHub app integration that **doesn't require each user to have a separate setup**. An admin with the `Organization Owner` role on the GitHub account can install the Crafting GitHub app and select repos to grant access to Crafting. With the GitHub app, all users of Crafting sandboxes access the repos as the Crafting GitHub app, so no separate per-user setup is required.

To connect the GitHub app, go to the menu item `Connect -> GitHub` on the web console (or [here](https://sandboxes.cloud/github) for Crafting SaaS), read the instructions, and click `Install`.
![GitHub app menu](https://files.readme.io/21d688b-image.png)

**Note that only a user with `Organization Owner` on the GitHub side can finish the flow.**

Keep in mind that Crafting SaaS supports the GitHub app directly and it can be self-served. Crafting Self-hosted requires extra setup; please [contact us](https://crafting.dev/contact) for more information.

## Git Protocol Remap

When git submodules are used, the submodules can be referenced using the SSH protocol (`git@...`) or HTTPS (`https://...`), and this may cause checkout failures in sandboxes if the protocol is different from the git access configured on the system. The resolution is to put a file `/etc/gitconfig` in the base snapshot that maps one protocol to the other.

If git access is configured using the SSH protocol, the content of `/etc/gitconfig` should be (using `github.com` as an example; for other hosts, please modify accordingly, and please also replace `ORG`):

```ini
[url "git@github.com:ORG/"]
    insteadOf = "https://github.com/ORG/"
```

If git access is configured with the GitHub App Integration (using the HTTPS protocol), the content of `/etc/gitconfig` should be

```ini
[url "https://github.com/ORG/"]
    insteadOf = "git@github.com:ORG/"
```

---

# Git Service Integration for Preview

In the [Git Access](https://docs.sandboxes.cloud/docs/git-access) section, we talked about how to let individual developers access git repos for their day-to-day development. In this section, we talk about how to integrate Crafting further into the DevOps workflow to automatically generate a preview environment for each Pull Request. Specifically, we cover:

* [Launch a Sandbox from URL Posted to Pull Request](#launch-a-sandbox-from-url-posted-to-pull-request)
* [Launch a Sandbox Automatically as part of CI process](#launch-a-sandbox-automatically-as-part-of-ci-process)
* [Launch a Sandbox using Github Action](#launch-a-sandbox-using-github-action)

## Launch a Sandbox from URL Posted to Pull Request

A good practice with Crafting Sandbox is for the CI tool to automatically post a URL to each Pull Request. When a developer clicks such a URL, the system launches a sandbox with the code branch referenced in the Pull Request as a preview environment.

### How to use

**It is recommended to use the [dedicated Github Action](https://github.com/marketplace/actions/sandbox-launch-action) if you only need a URL in the Github PR thread**. Please proceed with the approach below if the Github Action does not meet your requirements or you need highly customised behaviour.

To do that, you can modify your CI tool to automatically post a specific URL to each Pull Request. This can be done via your internal CI automation tool (e.g. Jenkins), or by automation tools provided by the git repo (e.g., Github Actions). Just use these tools to post a comment with a specially constructed URL for Crafting, where specific configurations can be put into HTTP query parameters.

Below is an example where the sandbox is provisioned automatically and workspace `frontend` is set to auto mode, which keeps track of updates in the upstream Git repository.
```http
https://sandboxes.cloud/create?template=frontend&ws_frontend_co_src_version=master&ws_frontend_mode=auto&dep_mysql_snapshot=mysql-snapshot&autolaunch=true
```

For the above URL, the query parameters consist of:

| Breakdown | Description |
| :--- | :--- |
| template=*frontend* | The `frontend` template will be used for the target sandbox. |
| ws\_**frontend**\_co\_**src**\_version=*master* | For the target sandbox, the checkout `src` in workspace `frontend` will use the `master` branch as an override. |
| ws\_**frontend**\_mode=*auto* | For the target sandbox, workspace `frontend` will be set to auto mode, which keeps track of updates in the upstream Git repository. |
| dep\_**mysql**\_snapshot=*mysql-snapshot* | Dependency `mysql` will use a snapshot called `mysql-snapshot`. |
| autolaunch=*true* | The target sandbox will be automatically launched. |
| env\_**NODE\_ENV**=*production* | The target sandbox will apply an environment variable `NODE_ENV=production` |

In the case of a simple workspace setup, the below URL can also be used to launch a sandbox:

```http
https://sandboxes.cloud/create?template=frontend&repo=orgname/reponame&version_spec=develop&mode=auto&autolaunch=true
```

For the above URL, the query parameters consist of:

| Breakdown | Description |
| :--- | :--- |
| template=*frontend* | The `frontend` template will be used for the target sandbox. |
| repo=*orgname/reponame* | If a workspace's checkout matches the repo, the target workspace will be selected as the target of further customisation, such as `version_spec`, `mode` and `autolaunch`. |
| mode=*auto* | If the workspace's checkout matches the repo, the auto follow mode is turned on. |
| version\_spec=*develop* | If the workspace's checkout matches the repo, `develop` will be used as the override. |
| autolaunch=*true* | The target sandbox will be automatically launched. |

### Supported query parameters

An overview of the query parameters and their descriptions is as follows:

| query parameter | description |
| :--- | :--- |
| template | The template name for the target sandbox; if not provided, the first template will be used. |
| flavor | The flavor of the template; if not provided, the default flavor will be used. |
| sandbox\_name | The name for the target sandbox; if not provided, a name prefixed with the username will be populated. If an invalid name is specified, the sandbox might not be created automatically regardless of the value of *autolaunch* |
| ws\_WORKSPACE\_co\_CHECKOUT\_version | WORKSPACE is the name of the target workspace; CHECKOUT represents the path of the corresponding checkout in the workspace. The accepted value is a *branch name*, *commit number*, etc. |
| ws\_WORKSPACE\_mode | WORKSPACE represents the name of the workspace in the target app. |
| dep\_DEPENDENCY\_snapshot | DEPENDENCY represents the name of the dependency workload in the target app. The accepted value is the name of a snapshot whose service type is also aligned with the target app. |
| container\_CONTAINER\_snapshot | CONTAINER represents the name of the container workload in the target app. The accepted value is the name of a snapshot that is for containers and strictly matches the volumes information. |
| autolaunch | Accepted values are true or false. The sandbox will be automatically launched if this is set to true; otherwise, the sandbox creation page will only be populated with the other provided values. You can omit this flag if you would like another chance to review the settings before launching. |
| env\_ENV\_NAME | Inject an environment variable into the launching sandbox. |
| repo | This `repo` query parameter can be used to simplify the setup. Instead of using ws\_WORKSPACE\_\* prefixed query parameters, this `repo` query parameter can be used to match any workspace/checkout that uses the same repository. This query parameter is normally used together with the `version_spec` and `mode` query parameters. |
| version\_spec | Similar to ws\_WORKSPACE\_co\_CHECKOUT\_version, this version\_spec can be used to specify the version spec for the workspaces/checkouts matched by the `repo` query parameter. It must be used together with `repo`. |
| mode | Similar to ws\_WORKSPACE\_mode, this mode can be used to indicate the auto follow mode for the workspace matched by the `repo` query parameter. It must be used together with `repo`. |

## Launch a Sandbox Automatically as Part of CI Process

Crafting also supports launching sandboxes automatically as a Pull Request is generated, without any developer action. To achieve this, the following setup is required:

1. Hooks triggered by Pull Request: Your CI pipeline should already have hooks that get run for each PR; from there, you need to extract which branch of which repo needs to be set up for the preview, along with other settings.
2. Script to run the Crafting CLI to create a sandbox: You need to write a script to run the Crafting CLI, which can create a sandbox with a very specific configuration.

For allowing your CI tool to run the Crafting CLI, `cs`, we suggest using a [Service Account](https://docs.sandboxes.cloud/docs/account-setup#service-account-and-login-token), which supports a token-based login that doesn't require a browser. With that, the Crafting CLI can be integrated into any workflow you want.

To create a sandbox using the CLI with a specific configuration, please check the [CLI reference](command-line-tool#sandbox). The basic command is something like this:

```shell
$ cs sandbox create NAME -a TEMPLATE-NAME -D WORKSPACE-NAME/checkout[PATH].version=BRANCH-NAME
```

For example, to create a preview sandbox named `demo-preview-1` based on the `demo` template, using the branch `preview1` for the checkout `src/demo` in the workspace `dev`, the following command is used:

```shell
$ cs sandbox create demo-preview-1 -a demo -D 'dev/checkout[src/demo].version=preview1'
```

Note that launching a sandbox for every single Pull Request may consume a lot of computational resources. We recommend setting some special flags or naming conventions for launching sandboxes without user action.

## Launch a Sandbox using Github Action

Please refer to the [action page in Github marketplace](https://github.com/marketplace/actions/sandbox-launch-action).
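As a concrete illustration of the CI-driven approach above, a PR hook script might look like the following sketch (hypothetical; it assumes `cs` is installed and already authenticated via a service-account token, and the template, workspace, and checkout names are placeholders):

```shell
#!/usr/bin/env bash
# Hypothetical PR hook: create a preview sandbox for a PR branch.
set -euo pipefail

PR_NUMBER="$1"   # e.g. 42, extracted from the CI event
PR_BRANCH="$2"   # e.g. preview1, the PR's source branch

cs sandbox create "demo-preview-${PR_NUMBER}" -a demo \
  -D "dev/checkout[src/demo].version=${PR_BRANCH}"
```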
---

# Home screen message and sandbox instruction

Source: https://docs.sandboxes.cloud/docs/home-screen-message-and-sandbox-instruction.md

As each engineering team has its own workflows and practices, it typically has some documentation on how each developer follows those workflows and practices. As a development platform, Crafting often has its own user guide somewhere in a team's wiki or Notion pages. Even though it's good to have comprehensive out-of-band documentation, it's better to have some key custom messages inline for every developer to keep in mind.

Crafting provides this type of messaging support on two levels: *Home screen message* and *Sandbox instructions*. Both of them use *Markdown* syntax and are easy for the dev environment admin to edit. In this page, we talk about how to use them.

## Home screen message

The home screen message is shown on the home page of the Crafting web console (dashboard) for an organization to broadcast to every member of the team visiting the platform. Given it's on the first page a developer sees before doing anything on the platform, the following information is often put there:

* How to set up and onboard a developer to Crafting, e.g. git access, personal customization, dev credentials, etc., and links to detailed instructions.
* Best practices for the developers.
* Brief information on which template is for what purpose.

You can edit the message right on the Home page by clicking the `Customize` button in the top right corner. Only *Admins* can edit the message.
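For example, a home screen message might look like the following (a hypothetical snippet; the wiki URL and template name are illustrative, and the variable syntax is described below):

```markdown
Welcome to {{org.name}}!

* New to Crafting? Follow the [onboarding guide](https://wiki.example.com/onboarding) to set up git access and dev credentials.
* Use the `backend-dev` template for day-to-day backend work.

You are logged in as {{user.email}}.
```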
**Overview** is a markdown snippet associated with either an org or a template, each serving a different purpose. Additionally, you can inject runtime variables with double curly brackets. The overviews of an organization and of an app support different variables, and an unknown variable is rendered as an empty string if referenced.

## Sandbox instructions

The `Sandbox instruction` is used to provide information on how sandboxes created from a particular template should be used, and to offer shortcuts for developers using the sandbox. It's defined in the template, and the instruction is rendered on the sandbox page for all sandboxes created from that template.
Because you can inject runtime variables with double curly brackets into the sandbox instructions, they can be very useful for directly guiding developers to the right places according to runtime information, e.g. a URL, hostname, etc. For example, the above message came from the following markdown:

```markdown
This is a sandbox for demo purposes. Visit [this URL]({{endpoints.app.url}}) to see it.
```

*Sandbox instructions* can be customized by opening a Template and clicking the **SANDBOX INSTRUCTIONS** button.
### Variables and Syntax

For the full details about the variables and syntax, please read [Overview in Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition#overview). The following is a list of commonly used variables and whether they are supported in the org overview or the sandbox overview:

| Variable Name | Description | Org | Sandbox |
| :--- | :--- | :--- | :--- |
| org.name | The name of the current org. | Supported | Supported |
| user.email | Email of the current user. | Supported | Supported |
| sandbox.name | Name of the current sandbox, if applicable. | | Supported |
| sandbox.createdAt | Sandbox creation time, if applicable. | | Supported |
| sandbox.updatedAt | The last updated time of the current sandbox, if applicable. | | Supported |
| sandbox.template | The name of the template associated with the current sandbox, if applicable. | | Supported |
| sandbox.owner | The owner of the current sandbox, if applicable. | | Supported |
| endpoints.\[endpoint-name].url | The full URL of an endpoint. If there is no endpoint named *endpoint-name*, the variable is deemed unknown. | | Supported |
| endpoints.\[endpoint-name].dns | The DNS part of an endpoint. If there is no endpoint named *endpoint-name*, the variable is deemed unknown. | | Supported |

## Example

```markdown
Sandbox Notes
Sandbox name {{sandbox.name}}
Last updated {{sandbox.updatedAt}}
Owner {{sandbox.owner}}
Template {{sandbox.template}}
For unknown variable, we display {{unknown}}
```

An example rendered result will be as below:

```text
Sandbox Notes
Sandbox name sandbox-name
Last updated 2022-01-01
Owner sandbox-user
Template example-template
For unknown variable, we display
```

---

# Develop on Kubernetes

Source: https://docs.sandboxes.cloud/docs/kubernetes-dev.md

In this page, we describe how to use the Crafting platform to develop Kubernetes apps, covering the following topics:

* [Use on-demand per-dev Kubernetes namespace with sandbox](#use-on-demand-per-dev-kubernetes-namespace)
* [Intercept traffic from Kubernetes to instantly iterate on your code](#intercept-traffic-from-kubernetes)
* [Conditional interception to use a large shared Kubernetes namespace for testing your services in isolation](#conditional-interception)
* [Demo video](#demo-video)

Here we assume your team has already set up Crafting for Kubernetes properly, and we talk about the day-to-day usage from a developer's point of view. To learn how to set it up, please see [Setup for Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-setup) for details. The team should set up the following things in advance:

* Connect the Crafting platform to a shared Kubernetes cluster for hosting the namespaces
* \[optional] Set up direct Kubernetes API server access to run `kubectl` in the sandbox
* \[optional] Set up the template that includes a Kubernetes resource for the per-dev namespace

## Use on-demand per-dev Kubernetes namespace

Crafting enables each developer to launch a dedicated namespace with services running end-to-end in it to support their development and testing work. In this model, each sandbox is launched with a dedicated namespace alongside it. The workspaces in the sandbox can directly access services running in the namespace. Developers using different sandboxes will have their own namespaces in Kubernetes.
More importantly, the lifecycle of the Kubernetes namespace is managed by the sandbox, so that:

* When the sandbox is created, the namespace is created;
* When the sandbox is deleted, the namespace is deleted as well, freeing up resources;
* When the sandbox is suspended/resumed, the number of replicas for the services in the namespace is scaled down to 0 / scaled back, respectively, further improving resource utilization.

The mechanism leverages Crafting's resource model: define a specially configured `Kubernetes resource` to implement the namespace management, and include it in the sandbox template. Read [here](kubernetes-setup#orchestrate-deployment-of-per-dev-namespace-from-sandbox) for more details about the setup. The following guide assumes that setup is already done by the team.

As shown in the figure above, this sandbox has a workspace and a Kubernetes resource already set up. After launch, the sandbox is created and the corresponding hook to launch the Kubernetes namespace is executed in the workspace (e.g. `dev`). We can monitor the creation of the namespace from the `Kubernetes Clusters` menu, by selecting the connected cluster and namespace (note that it may take a few refreshes for the newly created namespace to be listed). We can see the new namespace is created and the services are getting started in it.

When the sandbox launch is done, we can see all services are ready, and we can also see them from the workspace with the `kubectl` command

At this point, we have a dedicated Kubernetes environment for testing and debugging. We can run the product flow end-to-end by hitting the Kubernetes Ingress (or Load Balancer Service), or through the endpoint in the sandbox, which can be set up to hit any service in the Kubernetes namespace, as shown below.

When the sandbox is suspended (or auto-suspended), the corresponding namespace will be scaled to zero, as shown below, and upon resuming, it will be scaled back quickly.

### Intercept traffic from Kubernetes

With Crafting, a developer can replace a service running in a Kubernetes cluster with the dev version running in the Crafting sandbox to develop business logic end-to-end. This is great for quickly iterating on the code without rebuilding the container every time: you can see your change live instantly. Crafting does that via traffic interception on a Kubernetes workload.

As shown in the figure above, `Developer A` intercepts the `API service`, so the traffic hitting the `API service` inside the cluster is rerouted to the `API service` running in `Crafting Sandbox 1`'s workspace. That way, `Developer A` can directly modify the code in the sandbox and rebuild/restart the service in the workspace, and the new version will directly receive traffic from the cluster. When interception is active, the modified service can also call other services in the cluster directly using the in-cluster DNS names or Pod/Service IP addresses (such as `backend`). If there is any callback from other services to the `API service`, the dev version will receive it. In summary, the dev version is virtually plugged in to replace the `API service` in the cluster, effective in an end-to-end flow.

Crafting allows multiple interceptions to the same or different sandboxes for integration testing. In the above figure, `Developer B` intercepts the `Cart service` at the same time, so the `Cart service` in `Crafting Sandbox 2` is the effective `Cart service` used in the cluster.
That way, `Developer A` and `Developer B` can let their dev versions of the services work together in the same product flow for integration testing. Crafting also supports conditional interception, which only intercepts specific traffic streams into the services, to avoid developers interfering with each other. Please see [conditional interception](#conditional-interception) for more information.

> 📘 Did you know?
>
> Traffic interception can be done in any Kubernetes namespace in the connected cluster, not limited to the per-dev namespace described in [Use on-demand per-dev Kubernetes namespace](#use-on-demand-per-dev-kubernetes-namespace). It can also be used for debugging any Kubernetes deployment in the cluster (e.g. a shared staging environment).

To start an interception, we can go to the namespace under the `Kubernetes Clusters` menu item, and click `Start interception` for the target service (highlighted below).

In the dialog, select the sandbox and workspace we want to intercept the traffic to, and select the source and destination ports. After clicking `Confirm`, the traffic interception is established. At this time, if we go to the sandbox page, we can see an active interception. Here we can see the detailed information regarding the interception and can stop the interception.

Note that we can also start an interception from the sandbox page by clicking `Start interception` in the `Actions` menu.

We can also start or stop a Kubernetes interception using the CLI

```shell
$ cs kubernetes intercept [start|stop]
```

#### Conditional interception

Crafting supports `Conditional traffic interception`, i.e., only intercepting the requests that match specific headers from Kubernetes to the sandbox. With conditional interception, many developers can use a shared Kubernetes deployment as the base environment and conditionally intercept their own testing traffic to hit the dev version of the service running in their sandbox, without letting their dev version of the service interfere with the rest.

As shown above, only the traffic from Developer A (green arrow) is subject to the interception replacing `API-Service`, while traffic from Developer B (yellow arrow) is not. Similarly, only the traffic from Developer B (yellow arrow) is intercepted for the `Cart` service. That way, each developer can test their dev version of the service using a shared environment without affecting each other. This is especially useful when building a per-dev namespace that includes all the services is not economical. Specifically, we recommend using conditional routing in either of the following cases:

* For a large Kubernetes environment with hundreds or thousands of services
* For a large engineering team with hundreds or thousands of developers

Conditional interception depends on headers in the request to identify which traffic stream it belongs to and decide whether to reroute it. Therefore, the services themselves need to support header propagation so that the traffic headers can be passed along. Please see the [setup guide](https://docs.sandboxes.cloud/docs/kubernetes-setup#use-sandbox-endpoint-for-conditional-interception) regarding how to set it up.

To start a conditional interception, after opening the interception dialog and selecting workloads, ports, etc., uncheck `Intercept all traffic to the sandbox` as shown below.
After clicking `NEXT`, you get to the conditional routing dialog, where you can add endpoints to your sandbox which will inject headers that can be propagated by common tracing libraries for proper conditional interception. For example, here we can add an endpoint named `test` and select the `frontend` service in our target namespace as the entry point (as shown below).

After setting the endpoint, we click `START` to start the interception. As shown in the above interception status, the endpoint `test` is now added to send traffic with special headers to the `frontend` service. And *only* traffic with these headers will be routed to the dev version of the checkout service running in the sandbox. You can start testing your dev version of the checkout service within the context of the target Kubernetes namespace, without worrying about interfering with other developers using the same Kubernetes namespace for their testing.

## Demo Video

A demo video on how to develop on Kubernetes with Crafting can be found [here](https://youtu.be/J8LbuUVP_Do)

---

# Setup for Kubernetes

Source: https://docs.sandboxes.cloud/docs/kubernetes-setup.md

This section walks you through how to set up Crafting to empower your developers with the simplest experience for boosting productivity on day-to-day Kubernetes-related development tasks. For the user guide on how a developer uses this setup to develop on Kubernetes, please see [Develop on Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-dev).

Specifically, the outline of this page:

* [How Crafting Works with Kubernetes](#how-crafting-works-with-kubernetes)
* [Setup Guide](#setup-guide)
* [Connect a Kubernetes cluster](#connect-a-kubernetes-cluster)
* [Setup Kubernetes access in the sandboxes](#setup-kubernetes-access-in-the-sandboxes)
* [Orchestrate deployment of per-dev namespace from sandbox](#orchestrate-deployment-of-per-dev-namespace-from-sandbox)
* [Access Kubernetes deployment using sandbox endpoint](#access-kubernetes-deployment-using-sandbox-endpoint)
* [Use sandbox endpoint for conditional interception](#use-sandbox-endpoint-for-conditional-interception)
* [Share with developers](#share-with-developers)
* [Additional Information](#additional-information)
* [Features](#features)
* [Supported Kubernetes clusters](#supported-kubernetes-clusters)
* [Setup video](#setup-video)

## How Crafting Works with Kubernetes

Crafting's Kubernetes Development Experience augments an existing Kubernetes cluster with rich development capabilities integrated with the Crafting system by installing the Crafting Kubernetes agent in the cluster. The agent takes care of communicating and collaborating with the Crafting system, regardless of where your Kubernetes cluster is hosted (any location, any cloud provider), as long as it's able to connect to the Crafting system (including SaaS, self-hosted, and Express).

![K8s Debug](https://files.readme.io/3273f58-K8sDebug.png)

## Setup Guide

### Connect a Kubernetes cluster

A Kubernetes cluster can be connected to the Crafting system using a single command. Before that:

* Make sure your Kubernetes cluster is [supported](#supported-kubernetes-clusters);
* Make sure you are in a terminal with `kubectl` and `helm` installed, and `kubectl` is able to access the Kubernetes cluster with full access (e.g. `cluster-admin` equivalent privilege);
* The Crafting CLI `cs` is downloaded and installed, and has admin permission to access the Crafting system.
Run the following command:

```shell
$ cs infra kubernetes connect
```

The command prompts for a name for the cluster (this name is used on the Crafting system side and doesn't need to be the exact name of the cluster), installs the `Crafting Kubernetes Agent` using `helm` in its own namespace, and enables the following development capabilities:

* Direct in-cluster network access (Pod IPs, Service IPs, Service DNS) from Crafting sandboxes;
* Traffic interception to reroute (conditionally or unconditionally) incoming traffic to a Crafting sandbox;
* Direct Kubernetes API server access from Crafting sandboxes (without additional access setup in the Crafting sandboxes and/or from the cloud provider).

Please check out [Features](#features) for the details.

> 🚧 If Kubernetes NetworkPolicy is used
>
> Kubernetes NetworkPolicy may prevent the Crafting Kubernetes Agent from communicating with workloads deployed in the other namespaces, and some of the above features may not work properly. In a cluster for development, please refer to [Kubernetes NetworkPolicy](#kubernetes-networkpolicy) for how to enable communication between the Crafting Kubernetes Agent and other workloads.

The command performs preflight checks and attempts auto-detection of the Kubernetes in-cluster network setup. In most cases, it will be able to detect the in-cluster CIDRs of the Pod network and Service network. However, it may fail for some clusters, and in that case it will ask you to enter the CIDRs.

> 📘 AWS EKS Specific
>
> As EKS clusters use VPC subnet CIDRs directly for Kubernetes Services, the `cs` command will not be able to detect the Service subnet. In this case, you can enter the full VPC subnet CIDR directly, or the individual CIDRs of the subnets in the VPC. This information is required for direct in-cluster network access to work properly.

Once the agent is installed successfully, the cluster will show up in the Crafting Web Console:

![Connected K8s](https://files.readme.io/3919dbb-ConnectedK8s.png)

### Setup Kubernetes access in the sandboxes

As the next step, setting up the `kubeconfig` file in the sandboxes allows the developers to access the cluster directly using `kubectl`. This is optional if sandboxes don't orchestrate an automated deployment in the cluster and the developers don't need to access the cluster directly. Features like direct in-cluster network access and traffic interception don't need Kubernetes access from sandboxes.

#### Direct Kubernetes API server access

If direct Kubernetes API server access is enabled during agent installation, the following file can be used (as `~/.kube/config` or any file pointed to by the environment variable `KUBECONFIG`, assuming the name of the cluster used during installation is `example`):

```yaml
apiVersion: v1
clusters:
- cluster:
    server: http://example.k8s.g.sandbox
  name: example
contexts:
- context:
    cluster: example
  name: example
current-context: example
```

The DNS name `example.k8s.g.sandbox` is specially registered inside each Crafting sandbox, proxied to the Kubernetes API server of the cluster connected under the name `example`.

There are a few ways to save this `kubeconfig` file:

* As a [Secret](https://docs.sandboxes.cloud/docs/secrets), adding the env `KUBECONFIG` to the sandbox template.
  For example, if the shared organizational secret is named `example-kubeconfig`, then set the env `KUBECONFIG=/run/sandbox/fs/secrets/shared/example-kubeconfig`;
* Directly write it to `~/.kube/config` and include that file in a [home snapshot](https://docs.sandboxes.cloud/docs/workspaces-setup#home-snapshots);
* Use a setup script (`/etc/sandbox.d/setup` or `~/.sandbox/setup`) to generate `~/.kube/config` every time a sandbox starts.

#### Through Cloud Provider

This is the alternative way if direct Kubernetes API server access is disabled or special control is needed via the cloud provider:

* \[ ] Follow [Cloud Access Setup](https://docs.sandboxes.cloud/docs/cloud-resources-setup#access-setup) to make sure the developer can access the cloud provider from the sandbox using the corresponding CLI tools (e.g. `gcloud`, `aws`, etc.);
* \[ ] Generate the `kubeconfig` file according to the cloud provider. Similar to above, find a way to save and share the file with sandboxes.

Specific to GKE, please follow [Cloud Access Setup](https://docs.sandboxes.cloud/docs/cloud-resources-setup#access-setup) and modify the generated `kubeconfig` file.

#### Fine-grained Access Control

On Kubernetes clusters supporting an external OIDC provider, the native Kubernetes RBAC can be used to apply fine-grained access control to Crafting users when they access the cluster from the workspaces.

##### AWS EKS

AWS EKS supports external OIDC providers. Add Crafting as one of the entries to the cluster (the following example uses Terraform; make sure the EKS endpoint can be accessed from the Crafting workspaces):

```terraform
variable "cluster_name" {
  description = "Name of the cluster"
}

variable "crafting_org" {
  description = "Org name in Crafting sandbox system"
}

resource "aws_eks_identity_provider_config" "crafting" {
  cluster_name = var.cluster_name
  oidc {
    client_id                     = var.crafting_org
    identity_provider_config_name = "crafting"
    issuer_url                    = "https://sandboxes.cloud"
  }
}
```

In the Crafting workspaces, prepare a `kubeconfig` file with a `user` like:

```yaml
users:
- name: crafting
  user:
    tokenFile: /run/sandbox/fs/metadata/owner/token
```

A full example of `kubeconfig` can be (the rest can be the same as generated by `aws eks update-kubeconfig ...`; `CA-DATA` and `EKS-ENDPOINT` are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: eks-cluster
  cluster:
    certificate-authority-data: CA-DATA
    server: https://EKS-ENDPOINT.eks.amazonaws.com
contexts:
- name: crafting
  context:
    cluster: eks-cluster
    user: crafting
current-context: crafting
users:
- name: crafting
  user:
    tokenFile: /run/sandbox/fs/metadata/owner/token
```

Then create a `RoleBinding` or `ClusterRoleBinding` in Kubernetes for fine-grained access control. The *subject* should be a `User` with a name like `https://sandboxes.cloud#EMAIL`. For example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crafting-user-foo
subjects:
- kind: User
  name: 'https://sandboxes.cloud#foo@gmail.com'
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: bar-cluster-role
  apiGroup: rbac.authorization.k8s.io
```

### Orchestrate deployment of per-dev namespace from sandbox

By leveraging a `resource` in a sandbox, a developer is able to have a namespace-isolated, per-sandbox deployment automatically, and start development without dealing with any Kubernetes-related tasks. Define the following in a Sandbox Template:

```yaml
env:
- APP_NS=${SANDBOX_APP}-${SANDBOX_NAME}-${SANDBOX_ID}
workspaces:
- name: dev
  checkouts:
  - path: src
  ...
...
resources:
- name: kubernetes
  brief: The deployment in the Kubernetes cluster
  handlers:
    on_create:
      use_workspace:
        name: dev
        run:
          cmd: |
            # Create the namespace if not exists.
            kubectl create ns "$APP_NS" || true
            kubectl -n "$APP_NS" apply -f deploy/kubernetes.yaml
          dir: src
    on_delete:
      use_workspace:
        name: dev
        run:
          cmd: kubectl delete ns "$APP_NS"
          dir: src
    on_suspend:
      max_retries: 1
      use_workspace:
        name: dev
        run:
          cmd: kubectl -n "$APP_NS" scale --replicas=0 --all deploy
          dir: src
    on_resume:
      use_workspace:
        name: dev
        run:
          cmd: kubectl -n "$APP_NS" scale --replicas=1 --all deploy
          dir: src
```

### Access Kubernetes deployment using sandbox endpoint

A per-sandbox deployment can be accessed using a sandbox Endpoint, which is more convenient and resource-efficient than creating an Ingress or Load Balancer Service in the deployment (these often take long to provision, incur additional cost, and are difficult to use, with a direct IP or a long generated DNS name, and no access control). A sandbox Endpoint can be opened with a click from the Web Console and comes with an access control feature which protects in-progress work from being accessed by the public.

First, add the Endpoint to the definition and use a workspace as its backend:

```yaml
endpoints:
- name: k8sapp
  http:
    routes:
    - path_prefix: /
      backend:
        target: dev        # This references the following workspace
        port: k8s-forward  # This references the port in the workspace
workspaces:
- name: dev
  ports:
  - name: k8s-forward
    port: 8888
    protocol: HTTP/TCP
  ...
```

With the above config, the Endpoint `k8sapp` forwards traffic to the workspace `dev` on port `8888`. Now add a daemon in the workspace to forward the traffic to a workload in the Kubernetes cluster, in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest):

```yaml
daemons:
  k8s-forward:
    run:
      cmd: cs k8s forward 8888:appsvc.${APP_NS}:8888
```

The above configuration uses the env `APP_NS` as the namespace of the per-sandbox deployment, where `appsvc` is the name of the Kubernetes Service serving as the entry of the deployment. The second `8888` must be a port defined in the Service.

> 📘 Why not kubectl port-forward?
>
> The above command can be replaced with `kubectl port-forward ...`; however, `kubectl port-forward` is unstable and often gets disconnected for unknown reasons. Once that happens, the whole command must be restarted before it can forward new connections.

### Use sandbox endpoint for conditional interception

The [Traffic Intercept and Reroute](#traffic-intercept-and-reroute) capability provided by the Crafting Kubernetes agent supports conditional rerouting based on special HTTP headers. Once configured, only HTTP requests with the specified headers are rerouted to sandboxes, thus allowing multiple developers to intercept the same deployment simultaneously without interfering with each other.

As conditional rerouting relies on special HTTP headers, the following requirements must be met before the feature can work properly:

* The application itself must support HTTP header propagation. For a specific service, the relationship between incoming and outgoing requests is determined by application-specific logic. The service must propagate the special headers from incoming requests to the outgoing ones, so the complete end-to-end transaction of requests can be correctly rerouted;
* The first incoming request must carry the designated special HTTP headers.

The first requirement must be fulfilled by the application itself.
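As a rough illustration of both requirements (the header name `x-sandbox-route` and the hosts below are placeholders, not the platform's actual header; see the linked page for the real configuration):

```shell
# The first incoming request carries the designated routing header...
curl -H 'x-sandbox-route: alice' https://app.example.com/orders

# ...and each service must copy that header onto its own outgoing requests,
# e.g. when calling a downstream service, so the whole request chain is
# rerouted consistently to the same sandbox.
curl -H 'x-sandbox-route: alice' http://payments:8080/charge
```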
Regarding the second requirement, the sandbox endpoint has the capability of injecting HTTP headers with special values. Please read [Develop on Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-dev#conditional-interception) for more details. Note that the gRPC protocol uses HTTP/2, so it is also supported for conditional routing if the headers are configured properly.

### Share with developers

Once the setup is done, including the Template for sandboxes and the related snapshots, share the Template with developers. The developers can then get ready-to-use Kubernetes development environments by simply creating sandboxes from the Template.

## Additional Information

### Features

#### Direct in-cluster Network Access

When `Kubernetes Interception` is enabled on a sandbox, the full Kubernetes in-cluster network can be accessed from the sandbox, for example, directly accessing a Pod by its IP address, or resolving and accessing a Kubernetes Service by its DNS name. This helps the outgoing communication from an in-development service (launched from a workspace in the sandbox) reach other dependencies in the cluster in the same way (e.g. using Kubernetes Service DNS names) as in a production deployment. This feature doesn't require Kubernetes access from the sandbox.

#### Traffic Intercept and Reroute

The incoming traffic to any Pod in the cluster can be intercepted (either on the HTTP level or the TCP level) and rerouted to a workload in the sandbox. This helps quickly validate a change to a service without building and re-deploying it. For HTTP interception, the interception can run conditionally based on special HTTP headers and their values. This allows multiple developers to intercept the same deployed service and validate individual changes without conflict.

#### Direct API Server Access

This feature conveniently gives developers access to the connected Kubernetes cluster without additional access setup with the cloud provider. The *Crafting Kubernetes Agent* proxies the connection from the sandbox to the API server and automatically injects the access token of the preconfigured service account (bound to a ClusterRole, default is *cluster-admin*). On the sandbox side, the special DNS name `CLUSTER-NAME.k8s.g.sandbox` can be used to access the API server of the cluster connected under the name `CLUSTER-NAME`.

### Supported Kubernetes clusters

The *Crafting Kubernetes Agent* requires the deployment of `privileged` pods in the cluster, and it must be able to perform node-level operations. Nodeless clusters (e.g. EKS based on Fargate profiles, GKE Autopilot clusters) are not supported.

The minimum supported (tested) Kubernetes version is `1.21`. Older clusters, or clusters not running `containerd` or docker-shim as the container runtime (CRI implementation), are not supported.

### Kubernetes NetworkPolicy

When Kubernetes NetworkPolicy resources are deployed, it's likely that the Crafting Kubernetes agent can't communicate with other workloads, and features like [Direct in-cluster Network Access](#direct-in-cluster-network-access) and [Traffic Intercept and Reroute](#traffic-intercept-and-reroute) won't work properly.
If that's the case, please apply the following *NetworkPolicy* to the target namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-crafting
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: crafting-sandbox
```

### Setup video

The setup steps are also described in our video demo [here](https://youtu.be/J8LbuUVP_Do?t=271).

---

# Launch a sandbox

Source: https://docs.sandboxes.cloud/docs/launch-a-sandbox.md

In this page, we describe how to start a new sandbox with your code and configuration. We will use a simple version of a demo app to illustrate the process.

## Start a sandbox from home page on Web Console

From the home page in the Crafting Web Console, we can start a new sandbox right there. Clicking the card opens a dialog with several options.

If your organization has already defined templates for developers to choose from, it will default to the option of creating a sandbox with a pre-defined template. You can select the template you need and create a sandbox. You can click the `Create` button at the bottom to launch a sandbox with the default config, or click `Customize` to get to the customization page, as shown below.

You can also choose `Create a Workspace` if you don't want to use any pre-defined template. That flow is covered in [Start a Workspace](https://docs.sandboxes.cloud/docs/start-a-workspace).

### Customize checkout

You can customize which version of the code you want to check out and run in your sandbox from the customization page. It can be different from the default branch defined in the template to fit different purposes of the sandbox, e.g.,

* For coding a new feature, you can create the sandbox from master, and then do `git checkout -b` to create your own branch in the sandbox
* For previewing, you can select a branch that was recently pushed to the repo (`Recent branch`) or a branch corresponding to an open Pull Request (`Open PullRequest`)
* Or you can simply specify a custom branch or tag to check out code from (`Custom`)

#### Customize environment variables

Another common customization is to override the default environment variables in the sandbox to have your own configuration on top of the template. The buttons highlighted above allow you to customize environment variables on the sandbox level (applied to all workspaces) or on the individual workspace level. Please see [Environment variables (ENV)](https://docs.sandboxes.cloud/docs/env-management) for details.

#### Customize data snapshots

At sandbox launch time, you can also customize which dataset to load into the databases in the sandbox. Your team may have default data snapshots defined in the template for loading the default data set into the dev environments. Here you have the ability to change that for your sandbox.

#### Launching a sandbox

![Launching a sandbox](https://files.readme.io/3387d2b-image.png)

After clicking `Launch` from the customization page (or clicking `Create` directly from the dialog), your new sandbox is launched and the Crafting platform will launch the corresponding containers in the sandbox.
When it is ready, you can start working on the sandbox (see [Work on a sandbox](https://docs.sandboxes.cloud/docs/work-on-a-sandbox) for details).

### Create a sandbox from other places

Additionally, a developer can create a sandbox from several other places:

* From the sandbox list page
* From the template page
* Via the Crafting command line tool (CLI)

```shell
$ cs sandbox create
```

From the Crafting command line tool, `cs`, you can create a sandbox with detailed customization by specifying command line parameters. Please see [Command Line Tool](https://docs.sandboxes.cloud/docs/command-line-tool#sandbox) for details. This is also the recommended way to programmatically create a sandbox in your automation workflow. Please see [Git Service Integration for Preview](https://docs.sandboxes.cloud/docs/git-integration) for more description of the use case.

* By clicking a URL from a browser

A developer can also create a sandbox by clicking a URL from their browser, e.g.,

```text
https://sandboxes.cloud/create?app=frontend&ws_frontend_co_src_version=master&ws_frontend_mode=auto&dep_mysql_snapshot=mysql-snapshot&autolaunch=true
```

This allows a customized link (usually auto-generated by tools) to be posted in Pull Requests or Slack channels for people to create a sandbox with a specific configuration, which is common in the preview use case. Please see [Git Service Integration for Preview](https://docs.sandboxes.cloud/docs/git-integration) for details.

---

# Login

Source: https://docs.sandboxes.cloud/docs/login.md

A user's main identity on the Crafting Platform is their email address, typically a work email address. For Crafting SaaS users, you can log in at [https://sandboxes.cloud](https://sandboxes.cloud); for Crafting Self-hosted, you can log in with the custom URL specific to your site.

To get an account, you can ask for an invitation from your organization's administrator. Your administrator can also choose to use domain-based account setup, where all users from a certain domain automatically get an account upon first login.

## Login with Google Single-Sign-On (SSO)

The main login method we provide is Google SSO. If your organization is using [Google Workspace](https://workspace.google.com/) (previously known as G Suite), you can simply log in using your work email. Or if you have a trial account with us based on your Gmail address, it should work directly.

## Login with GitHub

You can also log in with a GitHub identity, in which case your email address needs to be associated with your GitHub account and listed as "Public email". You can view and adjust this setting on your GitHub [profile page](https://github.com/settings/profile).

## Where Login is Needed Other than Accessing Web Console

There are a few other cases where you will need to log in to authenticate yourself:

* When you access an endpoint of a sandbox that is not made public to the Internet, you will need to authenticate yourself just like when accessing the web console.
* When you use the CLI from your local machine for the first time, you will be prompted to log in and authenticate yourself.

> 🚧 Password Login
>
> For security reasons, we currently do not support password-based authentication. Please contact us at [contact@crafting.dev](mailto:contact@crafting.dev) to talk about your specific case.
---

# Network configuration and endpoints

Source: https://docs.sandboxes.cloud/docs/network-setup.md

This page talks about how to let components in the sandbox communicate with each other and how to access services running in the sandbox from outside via `endpoints`, specifically:

* [How services communicate with each other within sandbox](#how-services-communicate-with-each-other-within-sandbox)
* [Direct access via hostname and port](#direct-access-via-hostname-and-port)
* [Hostname Aliases](#hostname-aliases)
* [Via in-sandbox port-forwarding (from workspace only)](#via-in-sandbox-port-forwarding-from-workspace-only)
* [How to expose services running in the sandbox for external access](#how-to-expose-services-running-in-the-sandbox-for-external-access)
* [Setup endpoints](#setup-endpoints)
* [From local via SSH tunneling or port-forwarding](#from-local-via-ssh-tunneling-or-port-forwarding)
* [Extend DNS Resolver](#extend-dns-resolver)

## How services communicate with each other within sandbox

As mentioned before, in each Crafting sandbox, there is an overlay network Crafting sets up for services to communicate with each other. There are multiple ways for one service to reach another within a sandbox.

### Direct access via hostname and port

The `workspaces`, `dependencies`, and `containers` can address each other by their `names` as the network `hostname`, using the `ports` they define in the configuration. For example, for the workspace defined above, other services can reach it at `spring:8080` using the HTTP protocol. All the built-in dependencies open their default ports; details can be found at [https://sandboxes.cloud/dependencies](https://sandboxes.cloud/dependencies).

Note that the ports defined in `workspaces`, `dependencies`, and `containers` will NOT be directly exposed to the Internet for security reasons, so you can't access them directly from your local machine. See [below](#how-to-expose-services-running-in-the-sandbox-for-external-access) for how to access them.

#### Hostname Aliases

A workload (a `workspace`, `dependency`, or `container`) can be assigned extra hostnames as aliases in addition to its name, for example:

```yaml
workspaces:
- name: work
  hostnames:
  - work.local
  - api-work
dependencies:
- name: mysql
  service_type: mysql
  hostnames:
  - db
containers:
- name: logger
  image: ...
  hostnames:
  - log-service
  - logd
```

In this example, the workspace `work` can also be accessed via the hostnames `work.local` or `api-work`, and similarly for the dependencies and containers. Note: the same hostname alias can't be assigned to more than one workload.

#### Via in-sandbox port-forwarding (from workspace only)

Following common practice for local development, we sometimes configure our services in dev mode to hit a port on localhost to reach a dependency, e.g. hitting `localhost:6379` to reach the local `redis` server. Crafting makes it easy to replicate the same practice by allowing workspace port forwarding. For example, a workspace may set up two port forwardings, for port 3306 and port 6379, to `mysql` and `redis`, respectively (a minimal sketch follows below). That way, the service running in the workspace can just hit `localhost:3306`, which is equivalent to hitting `mysql:3306`. Setting up port forwardings on a workspace is also helpful for hybrid development by providing information on outbound connections. See [Port-forwarding for hybrid development](https://docs.sandboxes.cloud/docs/port-forwarding) for details.
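A minimal sketch of such a configuration, assuming dependencies named `mysql` and `redis` whose ports are named `mysql` and `redis` respectively:

```yaml
workspaces:
- name: app
  port_forward_rules:
  - local: '3306'    # the service hits localhost:3306 ...
    remote:
      target: mysql  # ... which is forwarded to mysql:3306
      port: mysql
  - local: '6379'    # likewise, localhost:6379 -> redis:6379
    remote:
      target: redis
      port: redis
dependencies:
- name: mysql
  service_type: mysql
- name: redis
  service_type: redis
```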
### How to expose services running in the sandbox for external access

For security reasons, the ports defined in `workspaces`, `dependencies`, and `containers` are NOT directly exposed to the Internet. To access the services running in the sandbox from outside, e.g. from a local laptop, there are the following ways.

#### Setup endpoints

We can define `endpoints` in the sandbox to manage access from outside. Each endpoint corresponds to an external-facing URL, which can be addressed from the Internet. To add an endpoint, from the editing view of a [Standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox), click `Add Component` as shown below and choose `endpoints`.

An endpoint can map traffic hitting the external URL to internal ports by a set of pre-defined rules. From the web console, we can define a direct mapping for a TCP endpoint or a set of routing rules for an HTTP endpoint. In the above examples, the Internet-facing URL `tcp://mysql--mysandbox-myorg.sandboxes.run:443` goes to the `mysql` service's default port (3306), and `https://app--mysandbox-myorg.sandboxes.run/` goes to the `frontend` service's `app` port (defined as 3000).

For security reasons, all `endpoints` exposed by a Crafting sandbox need to have a TLS layer, e.g. https for the HTTP protocol and tcp+tls for the TCP protocol. As shown above, for HTTP endpoints, we can add authentication to make sure only users in the same organization can access the endpoint. For APIs, this is sometimes unnecessary if the API itself already has an app-level authentication mechanism.

An HTTP endpoint supports routing different paths to different backend services (i.e., `workspaces`, `containers`, `dependencies`). It also supports more advanced routing rules. Please see [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition#endpoints) for details.

#### From local via SSH tunneling or port-forwarding

There are other, special ways to access services in the sandbox using the CLI `cs`. Using the same mechanism as `cs ssh`, we can set up an SSH tunnel from the local machine. In addition, after `cs portforward`, local ports on the local machine, e.g. `localhost:6379`, are mapped to the target ports defined in the workspace's `port forwardings`, so you can use your debugging tools to easily access in-sandbox services from your local machine that way. See [Port-forwarding for hybrid development](https://docs.sandboxes.cloud/docs/port-forwarding).

We recommend defining `endpoints` properly as shown above, which supports generic use cases such as mobile testing, end-to-end testing, and demoing.

### Extend DNS Resolver

Inside a workspace, the DNS resolver can be extended to resolve explicitly specified hosts entries, and/or to hook up a second-level DNS resolver.

#### Static Hosts Entries

The built-in DNS resolver is aware of hosts files from the following locations:

* `/etc/hosts`
* `/etc/sandbox.d/dns/*.hosts`

All these files follow the same format as `/etc/hosts`, and only IPv4 is supported (all IPv6 entries are discarded).
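For example, such a hosts file might look like this (the file name and entries here are illustrative):

```text
# /etc/sandbox.d/dns/internal.hosts -- same format as /etc/hosts, IPv4 only
10.0.1.20  registry.internal
10.0.1.21  build-cache.internal
```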
#### Chained DNS Resolver

To hook up a chained, second-level DNS resolver, add a config file (it must have the suffix `.conf`) in `/etc/sandbox.d/dns` with content like:

```json
{"domains":[".foo.com.", ".foo.org."],"servers":["10.2.3.4:53"]}
{"domains":[".bar"],"servers":["10.7.8.9:53"]}
```

There are a few requirements about the content:

* Each line must be a single, complete JSON document (the content is invalid if a JSON document spans multiple lines);
* Each domain must have leading and trailing dots.

Note: it's possible to specify a domain like `"."` to use a second-level resolver for all unresolved names; however, it may introduce latency or instability when the second-level resolver fails.

---

# Organizational settings

Source: https://docs.sandboxes.cloud/docs/org-settings.md

Crafting provides additional settings on the organization level to offer more convenience and flexibility to development teams. In this page, we go over some of the settings. The corresponding settings can be found at `Team -> Settings` for org administrators to adjust.

* "Default sandbox sharing mode": Whether newly created sandboxes are put in default (shared) mode or private mode. See [Access control in sandbox](https://docs.sandboxes.cloud/docs/access-control) for more information regarding private mode sandboxes.
* "Domain signin mode": Whether anyone from the authorized domain is automatically signed in as an active user, or as a disabled user first, requiring an admin to activate them. See [Account Setup](https://docs.sandboxes.cloud/docs/account-setup) for more information regarding domain signin.
* "Default favorite templates": Select a list of templates as "favorite templates" to be displayed more prominently in the system, making it easier for developers to create sandboxes from them.
* "Max number of pinned sandbox" and "Max duration of pinned sandbox": Limit how many sandboxes can be pinned and how long before the system automatically unpins a sandbox, for controlling resource utilization. See [Suspend and resume](https://docs.sandboxes.cloud/docs/suspend-and-resume) for more information regarding pinned sandboxes.
* "Retention of suspended sandboxes": Settings to let the Crafting platform delete sandboxes that are left in the suspended state for a long time without being touched.
* "Default base image": Set a container image to be used by all workspaces without a specific base image setting or base snapshot. See [Setup workspaces](https://docs.sandboxes.cloud/docs/workspaces-setup) for more information on base snapshots and images.

---

# Personalize your sandbox

Source: https://docs.sandboxes.cloud/docs/personalize.md

This page describes how to personalize your development environment in a Crafting Sandbox using a personal snapshot.

Crafting allows a team to pre-define standard setups for the default dev environments as a [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition) stored in templates. In addition, it also allows every developer to further personalize their workspaces to have their personal touch on top of the default, team-wide templates. You can define `Personal Snapshots`, a snapshot containing personalized configurations that is applied to the home directory (`/home/owner`) of *every workspace* in new sandboxes created by you.

## Create a personal snapshot

Similar to a `Home Snapshot`, a `Personal Snapshot` can be created via a CLI command. To create a personal snapshot, first edit the file `~/.snapshot.personal/includes.txt` and, optionally, `~/.snapshot.personal/excludes.txt`.
The `includes.txt` should specify the file patterns to be included in the snapshot, and the `excludes.txt` should specify patterns for the files to be excluded. Once the `includes.txt` file is ready, use the command to create the snapshot:

```shell
$ cs snapshot create --personal NAME --set-personal-default
```

### Set default snapshot

As the name suggests, a `Personal Snapshot` is personal to one user (the owner). It's visible and accessible to the owner only. Its name lives in a different namespace from other snapshots in the org; hence, it can have the same name as other kinds of snapshots in the same organization.

The sub-command `set-default` can be used to explicitly set a personal snapshot as the `Default Personal Snapshot`, which will be applied automatically to all new workspaces created later. If there are no existing personal snapshots, the first one becomes the default automatically. If there is more than one personal snapshot, the default one can be selected in the web console, under `Resources->Snapshots`.

We can also select a personal snapshot via CLI commands:

```shell
$ cs snapshot personal get-default
$ cs snapshot personal set-default NAME
```

To unset the `Default Personal Snapshot` so that no personal snapshot is applied to new workspaces:

```shell
$ cs snapshot personal set-default NONE
```

### What to put in personal snapshot and avoid conflicts

A `Personal Snapshot` is applied the same way as a [Home Snapshot](https://docs.sandboxes.cloud/docs/workspaces-setup#home-snapshots), which directly extracts the snapshot into the home folder. It's therefore recommended that no files overlap between a personal snapshot and a home snapshot: a home snapshot may later be updated with new content in an overlapping file, and that update would be reverted when the personal snapshot (possibly containing an old version of the file) is applied on top.

To avoid that, for example when adding personal environment variables, the best way is to add the following to `~/.bashrc`:

```shell
if [[ -f ~/.bashrc.me ]]; then
  . ~/.bashrc.me
fi
```

Then put `~/.bashrc` in a shared home snapshot, and `~/.bashrc.me` in the personal snapshot, whose `~/.snapshot.personal/includes.txt` contains:

```text
.bashrc.me
```

---

# Port-forwarding for hybrid development

Source: https://docs.sandboxes.cloud/docs/port-forwarding.md

In this page, we describe how to use Crafting's port-forwarding feature for hybrid development, which combines the power of the cloud with the familiarity of the development environments on the local machine.

When a developer wants to make a code change in a service, e.g. `Service B`, ideally it's best to be able to run it with its upstream and downstream services, e.g. `Service A` and `Service C`, so that the code change can be tested in an end-to-end product flow. However, due to setup complexity or lack of local resources, running the entire product on the local machine may be infeasible, making development and testing difficult. Crafting allows the developer to run only the target service, i.e., `Service B`, on the local machine, while it is virtually plugged into the sandbox, giving it an end-to-end context for testing. It achieves this by **two-way port-forwarding**.

```shell
$ cs portforward
```

With a single command, `cs portforward`, Crafting connects your local machine with a sandbox that runs on the cloud with multiple services end-to-end, and uses the service running on your local machine to "replace" the selected service.
In the example above, `Service B` is selected as the target, so all the incoming traffic to the ports defined for `Service B` on the sandbox is forwarded to the local machine, hitting the `Local Service B` that runs there. At the same time, traffic hitting the local ports from `Local Service B` is forwarded to the corresponding services running in the sandbox on the cloud, e.g. `Service A` and `Service C`. That way, an end-to-end product flow, hitting `Service A`, `Service B`, and `Service C` in this sequence, will actually hit `Service A` (on cloud), `Local Service B`, and `Service C` (on cloud), allowing the developer to test the `Local Service B` easily.

Key advantages of using Crafting port forwarding:

* Developers have near-zero workflow change from their local machine dev experience: same machine, same IDE, same local tools, same workflow.
* The IDE doesn't need to have remote development capability.
* The developer still manages the local code branch and commits code from local.
* Heavy dependencies and services are off-loaded to the cloud and no longer consume local resources.

## Integration testing example

Here we use an example of integration testing to illustrate port-forwarding. The demo video can be found [here](https://youtu.be/AodElect3Ks?t=225). In this example, we are using the demo app we talked about before, and we are going to replace the API service with the local version. Simply run `cs portforward` and select the sandbox (`pr-21`) and workspace (`api`); the Crafting CLI establishes forwarding as follows:

For the locally running API service, it forwards port 3001 from the cloud workspace to localhost 3001, so that the locally running API service just needs to listen on localhost 3001 to get all the requests forwarded to it. For the outgoing traffic from the local API service, it forwards the local ports 2181, 3306, 8087, and 9092 to the corresponding services running in the Crafting sandbox on the cloud, so that the local API service can call the database as well as other services on the cloud.

Then we launch the local API service in the RubyMine IDE, which runs the source code locally with the local code change we want to test. We can also add a breakpoint in the IDE just like when we debug a local service. Then we hit the endpoint for the sandbox online to test the flow end-to-end, which triggers a request to the API service. However, this time, with **port forwarding** turned on, instead of hitting the version running in the cloud sandbox, it hits the locally running API service, triggering the breakpoint. Here we can inspect values and continue the execution; the locally running API service will then fetch data from the cloud mysql and return to the frontend.

### Setup notes

#### Define ports and port-forwarding

First, as `cs portforward` is a local machine-based command, you need to have the Crafting CLI downloaded onto your local machine. The command `cs port-forward` relies on the `Sandbox Definition` and additional local flags to decide which incoming and outgoing forwardings to establish. In each workspace,

* `ports` are used to define incoming traffic, which is for incoming forwarding.
* `port_forward_rules` are used to define outgoing traffic, which is for outgoing forwarding.
For example, with the following `Sandbox Definition`:

```yaml
workspaces:
- name: frontend
  ports:
  - name: http
    port: 3000
    protocol: HTTP/TCP
  port_forward_rules:
  - local: '8080'
    remote:
      target: backend
      port: api
  - local: '6379'
    remote:
      target: cache
      port: redis
- name: backend
  ports:
  - name: api
    port: 8080
    protocol: HTTP/TCP
dependencies:
- name: cache
  service_type: redis
```

Running the command `cs port-forward` targeting the `frontend` workspace will establish 1 incoming forward and 2 outgoing forwards:

```text
$ cs port-forward -W demo/frontend
TYPE     FROM            TO              STATE  #CONN  DETAILS
Reverse  3000            localhost:3000  OK
Forward  localhost:8080  backend:8080    OK
Forward  localhost:6379  cache:6379      OK
```

#### Make sure config in source code points to the local target

In a Crafting sandbox, there are two ways for a service (e.g., service `api`) to talk to another one (e.g., service `backend`):

* The direct way is to let the config in `api` directly address the host name `backend`, just like in docker-compose or a Kubernetes namespace, e.g., `http://backend:8080` hits the `backend` service's port 8080.
* The port-forwarding way is to set up a port forwarding rule on the `api` service, pointing a local port (e.g. 8080) to the `backend` service's port 8080. With that setup, the `api` service can address `http://localhost:8080` to hit the `backend` service's port 8080. After setting up the port forwarding, the direct way of addressing the hostname `backend` continues to work.

For hybrid development, since the local machine is not part of the sandbox's overlay network, the port-forwarding setup is necessary for the local service to have outbound network connections to services running on the cloud. This is because the `cs portforward` command uses that configuration to establish forwarding rules between the local machine and the sandbox on the cloud. In this use case, we recommend also using the port-forwarding way for services in the sandbox to talk to each other, so that the config can be kept the same whether the service process is running on the local machine or in the cloud sandbox.

#### Make sure the local ports are available for outbound forwarding

One error that is often encountered is a port conflict when starting the port forwarding. For example, a developer may have a local redis running and listening on port 6379 on the local machine. Then, when starting a port forwarding session, the attempt to set up forwarding from the same local port 6379 to the cloud will fail because the port is already taken by the local redis. In this case, we recommend turning off the local service to free the port. Alternatively, you can skip the outbound forwarding by adding the option `-F, --skip-forward-rules` to the `cs portforward` command.

---

# Quickstart Guide

Source: https://docs.sandboxes.cloud/docs/quick-start.md

In this section, we show how you can get started with Crafting and explore its use cases.

## Install Crafting Express to Your System

The quickest way to try out Crafting is to install the Express version of it on your system. You will need a Kubernetes cluster, which can be your existing Kubernetes cluster or one newly allocated from your cloud service provider. It's ideal for a quick trial on your own system with a small number of users. Please see [this page](https://docs.sandboxes.cloud/docs/crafting-express) for details.
## Create an Account on Crafting SaaS Platform

If you want to try the Crafting platform completely hosted by us, without requiring any setup on your system, you can request a trial account on our Crafting SaaS platform. You can try Crafting there with any open-source project or your own source code. It is scalable, and trial accounts can be converted into full accounts for long-term use. Please see [this page](https://docs.sandboxes.cloud/docs/crafting-saas) for details.

## Use Managed Self-hosting to Host Crafting in Your Cloud

If you want to host the Crafting platform on your cloud for scalable long-term use, and don't want to spend time setting it up, you can contact us for managed self-hosting: we set up the Crafting platform on your cloud and manage it for you. Please see [this page](https://docs.sandboxes.cloud/docs/crafting-self-hosted) for details.

## List of Use Cases

Crafting Sandbox is a comprehensive tool that supports many use cases. The following table lists the most common ones; please take a look at the corresponding section for the specific scenario and guide.

| Use Case | Description |
| :--- | :--- |
| [Code Change (PR) Preview](https://docs.sandboxes.cloud/docs/use-case-preview) | Have a sandbox for each code change or Pull Request, allowing the whole team to preview the change before it hits production |
| [Kubernetes Development and Testing](https://docs.sandboxes.cloud/docs/use-case-kubernetes) | Code and test with on-demand k8s namespaces in your k8s cluster, see your changes live instantly in k8s with traffic interception |
| [Maintainable Dev Environments](https://docs.sandboxes.cloud/docs/use-case-standardization) | Onboard new engineers quickly with replicable dev environments, keep everyone's dev env up-to-date without risk of breaking them |
| [Overcome Local Machine Slowness](https://docs.sandboxes.cloud/docs/use-case-compute-power) | Leverage unlimited computing power on the cloud with maximum savings via sharing and auto-suspension for dev environments |
| [Scale beyond Docker Compose](https://docs.sandboxes.cloud/docs/use-case-compose) | Scale your multi-service dev environments beyond docker compose and leverage unlimited containers on the cloud |
| [Team Collaboration, Local or Remote](https://docs.sandboxes.cloud/docs/use-case-collaboration) | Collaborate with your teammates from anywhere with dev environments that are shared and accessible online |

---

# Remote Desktop in Workspace

Source: https://docs.sandboxes.cloud/docs/remote-desktop-in-workspace.md

A workspace in a sandbox is a Linux environment which also supports running X-Window-based desktop applications. Crafting makes it easy to set up a remote desktop in a workspace with a single command.

## Requirements

Crafting Remote Desktop support requires the workspace to run an Ubuntu/Debian-based system. To access the desktop remotely, a Remote Desktop Client supporting the Microsoft RDP protocol must be installed on your local desktop machine. Some well-known clients are:

* Mac OS: Microsoft Remote Desktop Client
* Linux: Remmina

### Setup Guide

Run the following command **inside** a sandbox workspace:

```shell
cs addon install remote-desktop
```

It may interactively ask to set up the keyboard configuration during the process.
This setup can be saved in a [base snapshot](workspaces-setup#persist-packages-and-libraries-setup-with-snapshots) so that it doesn't have to be done manually every time for a new sandbox.

### Access Remote Desktop

From your local machine, make sure you have installed the CLI (see the [Download Page](https://sandboxes.cloud/download)). Then run the following command:

```shell
cs remote-desktop # or "cs rd" for short
```

After selecting a workspace, it will display a local-forwarded address that an RDP client can connect to, like:

```text
rdp://127.0.0.1:3389
```

The CLI will attempt to launch a well-known RDP client (Microsoft Remote Desktop Client for Mac OS, or Remmina for Linux) with the correct configuration; however, if the attempt fails, or you want a specific configuration, please read the following sections.

Note: regarding Microsoft Remote Desktop Client for Mac OS, `dynamic resolution` (resizing the desktop when the client window is resized) isn't supported when launching from the command line. To enable `dynamic resolution`, a profile must be created from the UI with that feature explicitly enabled.

#### Microsoft Remote Desktop Client for Mac OS

Following the URL printed by the CLI, from the client's UI, add a PC with a "PC name" of "localhost" (or "localhost:PORT" if PORT is not 3389) and save. Double-click to connect. The client will always prompt for a username/password (this can be suppressed by specifying an account when saving the configuration); just enter anything, as the connection is already authenticated and secured by the "cs rdp" command. It may warn about the certificate; dismiss the warning. When editing the configuration, more parameters can be configured in the "Display" tab, like color depth. If the performance is not ideal, try lowering the color depth.

#### Remmina

On most desktop Ubuntu systems, Remmina is pre-installed and registered to handle "rdp://" URLs, so the command "cs rdp" will be able to launch Remmina automatically (or other software registered with the "rdp://" URL schema). However, the default configuration of Remmina may not provide the best experience:

* Scale mode is not enabled by default. Click the button "Toggle Scale Mode" on the left bar to enable it, so the desktop auto-resizes when the client window is resized;
* The display quality may not be ideal. Click the button "Settings" on the left bar to adjust the quality and color depth accordingly.

### Use of Remote Desktop

Enabling Remote Desktop turns a workspace into a desktop environment on the cloud. From there, developers can:

* Run full-featured desktop IDEs, even if they don't support remote development capabilities as provided by VS Code or JetBrains Gateway;
* Run a desktop browser and desktop apps local to the development environment, without worrying about network latency.

---

# Repo Manifest

Source: https://docs.sandboxes.cloud/docs/repo-manifest.md

### Overview

`Repo Manifest` defines how the Crafting system automates the setup of a git repository in the workspace after checking out the code. It is in YAML format, including the following information:

* [Hooks](#hooks): define what to do after checkout, and how to build;
* [Daemons](#daemons): define what should run inside the workspace;
* [Jobs](#jobs): define what to run based on schedules.

An example:

```yaml
env: # Environment variables shared by all hooks, daemons and jobs.
- DB_ROOT_PASSWORD=mysql
- RAILS_ENV=development
hooks:
  post-checkout:
    cmd: |
      bundle install
      bundle exec rake db:migrate
  build:
    cmd: |
      bundle exec rubocop
daemons:
  rails: # Name of the process is "rails", which is to launch a rails server on port 3001.
    run:
      cmd: bundle exec rails s -p 3001 -b 0.0.0.0
jobs:
  housekeep: # Name of job is "housekeep", which performs house keeping every 10 minutes.
    run:
      cmd: ./housekeep
    schedule: "*/10 * * * *"
```

### Location

The default manifest file in a source repository is `.sandbox/manifest.yaml`, unless an override is specified in the [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition#checkouts).

### Shared Environment

The top-level section `env` defines environment variables shared by all hooks, daemons and jobs. Each item must be defined in the form of `KEY=VALUE` (no spaces around `=`), where environment expansion `$NAME` and `${NAME}` is supported in `VALUE`.

### Hooks

Hooks are invoked at specific stages during workspace creation or automatic branch following. The supported hooks are:
| Hook | When to Run |
| :--- | :--- |
| post-checkout | Optional; runs after any new code change is pulled from the remote repository. |
| build | Runs after new code change is pulled from the remote repository, and after `post-checkout`. If unspecified, no build will be performed. |
The value of each hook follows the [Run Schema](#run-schema). All the hooks in the manifest are optional. If unspecified, the script `.sandbox/HOOK-NAME` will be attempted when the hook is supposed to be invoked. If the script doesn't exist, the hook is simply skipped (treated as success).

### Daemons

*Daemons* are long-running processes in the background. They are automatically launched (after a successful build, if a build hook is provided), unless `disable_on_start: true` is specified (see [Disable On Start](#disable-on-start)). The `run` property follows the [Run Schema](#run-schema). The processes are managed by the workspace and kept running (restarted if they fail). They can be further controlled in the web console, or via the CLI commands:

* `cs ps`
* `cs restart [NAME]`
* `cs stop [NAME]`
* `cs start [NAME]`
* `cs logs`

### Jobs

*Jobs* are one-shot processes executed based on a schedule. The property `schedule` defines the schedule using the [crontab](https://man7.org/linux/man-pages/man5/crontab.5.html) format. The property `disable_on_start` set to `true` can be used to not schedule the job after workspace start (see [Disable On Start](#disable-on-start)). The `run` property follows the [Run Schema](#run-schema). The process is not restarted if it fails.

### Disable On Start

Both [Daemons](#daemons) and [Jobs](#jobs) support the property `disable_on_start` to not start/schedule them after workspace startup. For example:

```yaml
daemons:
  rails: # Name of the process is "rails", which is to launch a rails server on port 3001.
    run:
      cmd: bundle exec rails s -p 3001 -b 0.0.0.0
    disable_on_start: true
jobs:
  housekeep: # Name of job is "housekeep", which performs house keeping every 10 minutes.
    run:
      cmd: ./housekeep
    schedule: "*/10 * * * *"
    disable_on_start: true
```

This has the effect of not starting the daemon or scheduling the job after workspace startup. Later, the daemon and job can be manually started/scheduled from either the UI or the CLI.

### Run Schema

This schema defines how to run a process:

* `cmd`: defines the command line, which will be interpreted by `$SHELL -c`;
* `dir`: defines the working directory; the default is the checkout directory of the source repository;
* `env`: a list of environment variables in the form of `KEY=VALUE` (no spaces around `=`), overriding the [Shared Environment](#shared-environment); environment expansion `$NAME` and `${NAME}` is supported.

---

# Sandbox Definition

Source: https://docs.sandboxes.cloud/docs/sandbox-definition.md

### Overview

A Sandbox is designed to provide an all-in-one, self-contained development environment (the on-cloud portion of it, in contrast to the client side, e.g., mobile app, desktop client, etc.). It contains a *definition* which defines what's inside the sandbox and how to create/run a sandbox.
It's composed using a structured schema which includes:

* [overview](#overview): a markdown template to render customized information in the sandbox details page;
* [env](#env): a list of sandbox-scope (applied to all workloads) environment variables;
* [workspaces](#workspaces): a Linux-based development environment with source code checked out, built and automatically launched as a service (often a microservice) to serve business-specific functions;
* [dependencies](#dependencies): commonly used services consumed by the `workspaces`, like MySQL, Postgres, Redis, etc.;
* [containers](#containers): a service launched using a container image;
* [volumes](#volumes): additional volumes to be attached to [containers](#containers);
* [endpoints](#endpoints): a DNS name exposed to the Internet; when accessed, the traffic is routed to a workspace based on the rules (e.g. HTTP routing based on path);
* [resources](#resources): a list of resources to be managed with the sandbox lifecycle;
* [customizations](#customizations): additional customization capabilities for the convenience of using the sandbox.

A brief example represented in YAML:

```yaml
overview: |
  This sandbox is an example.
env:
- APP_NAME=example
- INSTANCE_TYPE=t2.micro
workspaces:             # Specifies all workspaces.
- name: frontend        # Workspace name which is also used as the hostname in a sandbox.
  checkouts:            # Specifies how to checkout source code.
  - path: src/frontend  # The local path relative to $HOME to checkout source code.
    repo:
      github:           # Checkout from GitHub (GitHub integration required).
        org: sample     # GitHub org name.
        repo: frontend  # Repository in the org.
  packages:             # Specifies the toolchains to be side-loaded.
  - name: nodejs
    version: '14.15.4'
  ports:                # Specifies the ports exposed by this workspace.
  - name: http
    port: 3000
    protocol: HTTP/TCP
  base_snapshot: base/frontend  # The snapshot to restore the root filesystem.
  home_snapshot: home/frontend  # The snapshot to restore files in home directory.
  probes:
    readiness:          # Specifies readiness probes.
    - name: http
      http_get:
        port: 3000
        path: /
- name: backend
  checkouts:
  - path: src/backend
    repo:
      github:
        org: sample
        repo: backend
  packages:
  - name: golang
    version: '1.17.2'
  ports:
  - name: api
    port: 8080
    protocol: HTTP/TCP
  base_snapshot: base/backend
  home_snapshot: home/backend
  probes:
    readiness:          # Specifies readiness probes.
    - name: http
      http_get:
        port: 8080
        path: /
  port_forward_rules:   # Forward local ports to a workspace/dependency.
  - local: "6379"       # The local port, must be a string.
    remote:
      target: redis
      port: redis
  wait_for:             # The list of workload names to wait for the readiness.
  - mysql
  system:
    daemons:
    - name: assistant
      run:
        cmd: /opt/assistant/bin/assistd
  lifecycle:
    on_create:
      run:
        cmd: ./lifecycle.sh
        dir: scripts
        env:
        - LC_FUNC=lc_$SANDBOX_LIFECYCLE
      max_retries: 3
      require_build: true
      timeout: 30m
    on_suspend: ... # Same schema as on_create.
    on_resume: ...  # Same schema as on_create.
    on_delete: ...  # Same schema as on_create.
dependencies:           # Specifies all dependencies required in the sandbox.
- name: mysql           # Dependency name which is also used as the hostname in a sandbox.
  service_type: mysql   # What kind of service the dependency provides.
  version: '8'          # The specific version of the service, optional.
  properties:
    database: app
- name: redis
  service_type: redis
containers:
- name: sqlpad
  image: sqlpad/sqlpad:latest
  env:
  - 'SQLPAD_AUTH_DISABLED=true'
  - 'SQLPAD_AUTH_DISABLED_DEFAULT_ROLE=admin'
  - 'SQLPAD_CONNECTIONS__mysql__name=mysql'
  - 'SQLPAD_CONNECTIONS__mysql__driver=mysql2'
  - 'SQLPAD_CONNECTIONS__mysql__host=mysql'
  - 'SQLPAD_CONNECTIONS__mysql__database=app'
  - 'SQLPAD_CONNECTIONS__mysql__username=root'
  - 'SQLPAD_DEFAULT_CONNECTION_ID=mysql'
  volume_mounts:
  - name: sqlpad
    path: /var/lib/sqlpad
  wait_for:
  - mysql
volumes:
- name: sqlpad
endpoints:    # The endpoints exposed to Internet.
- name: app   # Endpoint name which is used as part of the DNS name.
  http:       # This is an HTTP endpoint.
    routes:   # The HTTP routing rules.
    - path_prefix: /  # Matches all paths.
      backend:        # Route to the specified workspace and port.
        target: frontend
        port: http
resources:
- name: aws
  brief: Dev Resources on AWS
  terraform:
    workspace: dev
    dir: deploy/tf
    run:
      timeout: 600s
    vars:
      instance_type: '$INSTANCE_TYPE'
customizations:
- env:
    name: INSTANCE_TYPE
    display_name: EC2 Instance Type
    choice:
      options:
      - t2.micro
      - t3.medium
      - t3.large
- flavor:
    name: slim
    excludes:
    - sqlpad
    - aws
```

## Sections

### Template Overview

This defines an optional markdown template for rendering an informational section in the sandbox details page. The template follows the Handlebars [guide](https://handlebarsjs.com/guide/) for its syntax, with the following predefined variables:

| Variable Name | Description | Org | Sandbox |
| :--- | :--- | :--- | :--- |
| org.name | The name of the current org. | Supported | Supported |
| user.email | Email of the current user. | Supported | Supported |
| sandbox.name | Name of the current sandbox, if applicable. | | Supported |
| sandbox.createdAt | Sandbox creation time, if applicable. | | Supported |
| sandbox.updatedAt | The last updated time of the current sandbox, if applicable. | | Supported |
| sandbox.template | The associated template's name of the current sandbox, if applicable. | | Supported |
| sandbox.owner | The owner of the current sandbox, if applicable. | | Supported |
| endpoints.\[endpoint-name].url | The full URL of an endpoint. If there is no endpoint named *endpoint-name*, the variable is deemed an unknown one. | | Supported |
| endpoints.\[endpoint-name].dns | The DNS part of an endpoint. If there is no endpoint named *endpoint-name*, the variable is deemed an unknown one. | | Supported |
| resources.\[name].state.... | Referencing the value of the saved state of a resource. | | Supported |

For example, a Template with:

```yaml
overview: |
  # Sandbox Notes
  - Sandbox name: {{sandbox.name}}
  - Last updated: {{sandbox.updatedAt}}
  - Owner: {{sandbox.owner}}
  - Template: {{sandbox.template}}

  For unknown variable, we display {{unknown}}
```

will generate the following section in the sandbox details page:

```markdown
# Sandbox Notes
- Sandbox name: sandbox-name
- Last updated: 2022-01-01
- Owner: sandbox-user
- Template: example-template

For unknown variable, we display
```

### Env

A list of sandbox-scoped [environment variables](https://docs.sandboxes.cloud/docs/environment-variables) which will be applied to all workspaces.

### Workspaces

A workspace is a Linux-based development environment which runs services by automatically checking out source code, building and launching.
A developer is able to access the workspace remotely using SSH, Web IDE, etc., and debug the service live. A workspace is defined with the following information:

* [checkouts](#checkouts): how to checkout source code;
* [ports](#ports): the ports exposed by the workspace;
* [snapshots](#snapshots): the snapshots used to restore files;
* [probes](#probes): the readiness probes;
* [port forwarding](#local-port-forwarding): port-forwarding from the workspace to other workspaces, dependencies or containers;
* [env](#workspace-environment-variables): environment variables applied to the current workspace;
* [system](#workspace-system): system configurations, like daemons etc.;
* [wait for](#wait-for): define the runtime dependencies;
* [access restriction](#access-restriction): define the workspace *Restricted* mode;
* [lifecycle](#lifecycle): the workspace-level lifecycle hooks.

#### Checkouts

A checkout defines the rule to checkout source code from one repository, with all properties shown below:

```yaml
checkouts:            # Specifies how to checkout source code.
- path: src/frontend  # The local path relative to $HOME to checkout source code.
  repo:
    # Only one of the following sources can be specified:
    # Using GitHub integration.
    github:
      org: sample     # GitHub org name.
      repo: frontend  # Repository in the org.
    # Or using direct git checkout.
    # The value is a URI accepted by "git clone"
    git: git@github.com:sample/frontend
  # Optional version specification for checkout.
  version_spec: branch # or tag, or commit hash
  # Do not checkout submodules recursively.
  # Default is false.
  disable_recursive_checkout: true
  # Limit the checkout history.
  # Specifying this helps significantly speed up checkout when
  # working with large repositories.
  history:
    # The history depth. This value is passed as-is to
    # git flag --depth.
    depth: 10
    # Checkout history no earlier than the specified time.
    # This value is passed as-is to git flag --shallow-since.
    since: '2022-01-01'
  # Manifest overrides.
  manifest:
    overlays:
    - name: alternate
    - file: dir/filename.yaml
    - content: |
        daemons:
          frontend:
            run:
              cmd: yarn run start-alternate
```

The local path of a working copy and the remote are defined by `path` and `repo`. The value of `path` is relative to the home directory (`$HOME`). The `repo` property specifies one (and only one) of the supported methods to perform the checkout operation:

* `github`: this can only be used when the org completes the [GitHub Integration](https://docs.sandboxes.cloud/docs/github-integration), and the GitHub organization name and repository name are specified;
* `git`: use direct `git clone` to perform the checkout. The value is passed to `git clone`. Based on the URI, credentials may need to be pre-configured: e.g. using `git@...` requires the developer's managed public key (use CLI `cs info` to display it) to be registered in the git source control service provider.

If `version_spec` is unspecified, the code is checked out from the default branch (`master` or `main` for GitHub). Otherwise, it can be specified using one of:

* a branch name
* a tag name
* a commit hash

If the repository contains git submodules, they are automatically checked out unless `disable_recursive_checkout` is set to `true`.

The property `manifest` is used as a mechanism to skip the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest) that sits in the code base (file `.sandbox/manifest.yaml`) and use the overlays specified inline, which are merged to generate a final manifest.
There are 3 ways to define an overlay:

* `name`: the value specifies an alternative filename in the `.sandbox` folder, so the file `.sandbox/$(name).yaml` will be loaded;
* `file`: the full path of the file inside the source repo to load, so the file may sit in a folder other than `.sandbox`;
* `content`: the inline content of the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest).

The overlays are merged using the following rules:

* `env`: environment variables are replaced by variable name;
* `hooks`: the hook definition is replaced completely by hook name;
* `daemons` and `jobs`: the daemon/job is replaced completely by name.

##### Manifest Override Examples

Assume in the source code, the file `.sandbox/manifest.yaml` contains:

```yaml
env:
- SERVER_NAME=dev
- BACKEND_URL=http://backend:8080
daemons:
  server:
    run:
      cmd: ./start-server.sh
      env:
      - SERVER_KEY=abc
```

1. Completely replace the manifest

   In `checkout`, define the following:

   ```yaml
   # Manifest overrides.
   manifest:
     overlays:
     - content: |
         daemons:
           frontend:
             run:
               cmd: yarn start
   ```

   Because `manifest` is specified in `checkout`, the `.sandbox/manifest.yaml` is skipped, and the final result will be that defined in `checkout`.

2. Override environment and the daemon

   In `checkout`, define the following:

   ```yaml
   # Manifest overrides.
   manifest:
     overlays:
     - name: manifest # Load .sandbox/manifest.yaml
     - content: |
         env:
         - BACKEND_HOST=localhost
         - BACKEND_URL=http://$BACKEND_HOST:8080
         daemons:
           server:
             run:
               cmd: ./start-server.sh --backend-as-remote
   ```

   In the example, the first overlay loads the default manifest (it was skipped because `manifest` is specified; now it's explicitly loaded), and the next overlay specifies the overrides. According to the rules, `env` entries are replaced by variable name, and `daemons` are replaced by name, so the resulting manifest is:

   ```yaml
   env:
   - SERVER_NAME=dev
   - BACKEND_HOST=localhost
   - BACKEND_URL=http://$BACKEND_HOST:8080
   daemons:
     server:
       run:
         cmd: ./start-server.sh --backend-as-remote
   ```

   Note: the environment variable `SERVER_KEY` no longer exists because `daemons.server` is replaced completely.

3. Use an alternative manifest file

   In `checkout`, define the following:

   ```yaml
   # Manifest overrides.
   manifest:
     overlays:
     - name: alternate       # Load .sandbox/alternate.yaml
     - name: patch1          # Load .sandbox/patch1.yaml
     - file: config/env.yaml # Load config/env.yaml
   ```

   The above example will generate a final manifest by merging `.sandbox/alternate.yaml`, `.sandbox/patch1.yaml` and `config/env.yaml` together using the merge rules.

#### Ports

The property `ports` defines the exposed ports of the workspace. This tells the sandbox system which services the workspace exposes and how to route traffic to them:

* `name`: a name to reference the port, e.g. referenced in an endpoint's HTTP routes;
* `port`: the number of the port;
* `protocol`: the protocol running on the port, specified in `L7/L4` or `L4` format. The supported `L4` protocols are `TCP` and `UDP`. If `L4` is `TCP`, `L7` can be one of:
  * `HTTP`: the plain text HTTP protocol;
  * `HTTPS`: HTTP over TLS;
  * `GRPC`: the gRPC protocol over HTTP/2;
  * `H2`: the HTTP/2 protocol;
  * `H2C`: the plain text HTTP/2 protocol.

Although `protocol` is optional (the default is `TCP`), it's highly recommended to specify it explicitly. Some features require a specific value of `protocol`, e.g. `HTTP/TCP`.
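For example, a workspace serving plain HTTP on one port and gRPC on another might declare (the port names and numbers here are illustrative):

```yaml
ports:
- name: http
  port: 3000
  protocol: HTTP/TCP  # plain text HTTP over TCP
- name: grpc
  port: 9090
  protocol: GRPC/TCP  # gRPC (HTTP/2) over TCP
```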
#### Snapshots

[Snapshots](https://docs.sandboxes.cloud/docs/snapshots) can be restored during workspace creation in two tiers, both optional:

* `base_snapshot`: when specified, the snapshot is used to restore the workspace's root filesystem, excluding `/home` and some other temporary folders (e.g. `/tmp`). A custom container image from a *public* container registry can be used with the prefix `oci://`, e.g.
  * `oci://gcr.io/example/path/image:tag` (pulled from gcr.io)
  * `oci://example/image:tag` (pulled from Docker Hub)

  There are requirements for building a custom container image to be used as a base snapshot. Please read [Custom Container Image as Base Snapshot](#custom-container-image) below for more details.
* `home_snapshot`: when specified, the files from that snapshot are extracted to the owner's home directory (`$HOME`, which is `/home/owner` in most cases).

Snapshots are only restored at workspace creation time. Changes to the snapshots after a sandbox is created will not be applied to its workspaces.
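A minimal sketch of how the two tiers can be specified, assuming `base_snapshot` and `home_snapshot` are direct workspace properties as described above; the snapshot names follow the naming convention suggested later in this document and are illustrative:

```yaml
workspaces:
- name: dev
  # Either a named snapshot or a public container image with the oci:// prefix.
  base_snapshot: base-backend-r1
  home_snapshot: home-backend-r1
```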
##### Custom Container Image

A custom container image can be used as a base snapshot if the image is built with:

* bash
* git
* rsync
* jq
* iptables (if a docker daemon will run inside the workspace)
* sudo with password-less config
* UID/GID 1000/1000 not used

Here is an example of a minimal Dockerfile to build a custom container image:

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata locales locales-all sudo git rsync iptables && \
    update-locale LANG=en_US.UTF-8 && \
    sed -i -r 's/^(%sudo\s).+$/\1ALL=(ALL) NOPASSWD:ALL/g' /etc/sudoers
```

##### Home Skeleton in Base Snapshot

During the initial setup of a newly created workspace, following the standard convention, the home directory is created using the skeleton from `/etc/skel`. By leveraging the skeleton folder, the base snapshot may contain the content for the home directory and thus reduce the need for a home snapshot. The sandbox system looks up the skeleton in the following folders and uses the first one it finds:

* `/etc/skel.sandbox`
* `/etc/skel`

If neither is found, an empty home folder is created.

##### Startup Scripts

After the home directory is set up, the sandbox system looks up the following scripts and executes them in order (when present) every time a workspace starts up:

* `/etc/sandbox.d/setup`
* `~/.sandbox/setup`

#### Probes

Probes define extra mechanisms to determine whether the services in the workspace are ready or not, with the capability to leverage business-specific logic. A probe definition requires exactly one of the 3 supported methods:

* `command`: a full command line (interpreted by `$SHELL -c`) run from `/` as `root` during every sampling cycle; success of the command (exit code 0) indicates a positive result of the probe;
* `tcp_port`: a numeric port (not necessarily a port from the [`ports`](#ports) section) to which a TCP connection is attempted during every sampling cycle; a successful connection indicates a positive result of the probe;
* `http_get`: `port` specifies a numeric port (not necessarily a port from the [`ports`](#ports) section) and `path` specifies the HTTP path of the request; an HTTP GET request is issued during every sampling cycle, and an HTTP status code of 2xx indicates a positive result of the probe.

Additional properties are available to alter the sampling parameters:

* `interval`: a duration in the format `SSSs`, where `SSS` is a number of seconds suffixed by the character `s` representing the unit. It specifies the interval between two sampling cycles;
* `positive_threshold`: the number of consecutive positive results needed to turn the current state positive;
* `negative_threshold`: the number of consecutive negative results needed to turn the current state negative;
* `initial_delay`: a duration in the format `SSSs` (same as `interval`). It specifies how long after the creation of the workspace the probe waits before starting to evaluate;
* `initial_negative_threshold`: the number of consecutive negative results to yield a negative state during initialization (specifying 0 here will use a default value, which may not be 0).

Some examples of probe definitions:

```yaml
probes:
  readiness: # Specifies readiness probes.
  - name: http
    http_get:
      port: 8080
      path: /
  - name: ok
    command: '/usr/bin/status'
    interval: 60s # Run every minute.
  - name: port
    tcp_port:
      port: 8000
    initial_delay: 300s
    positive_threshold: 1 # as soon as the connection can be established, signal positive.
    negative_threshold: 3 # signal negative only after 3 consecutive failures.
```

Custom probes can also be used for activity detection, which indicates whether there is ongoing user activity so the sandbox is not auto-suspended:

```yaml Activity Probe Example
probes:
  activity:
  - name: custom-user-activity
    command: 'custom-user-activity-detect.sh'
```

Note: exit code `0` indicates ongoing activity, while a non-zero exit code indicates there is no ongoing activity.

The built-in activity detectors can also be explicitly disabled or enabled. Given the following example:

```yaml Built-in Activity Probes
probes:
  activity:
  - name: custom-user-activity
    command: 'custom-user-activity-detect.sh'
  activity_detection:
    disable_builtin_probes:
    - ANY
    enable_builtin_probes:
    - ENDPOINT
```

It disables all built-in probes (given the special name `ANY`) and then enables only the `ENDPOINT` probe. The available built-in probes are:

| NAME | DESCRIPTION |
| :--- | :--- |
| SSH | Established SSH connections are counted as user activity. This includes all remote IDE connections over SSH. |
| PORT\_FORWARD | Any running `cs portforward` on the client side is counted as user activity. |
| EXEC | Any running `cs exec` sessions are counted as user activity. |
| CLIENT | Any background `cs` sessions (e.g. auto port-forward) launched by other `cs` commands like `cs vscode`, `cs ssh`, `cs jetbrains` etc. are counted as user activity. |
| WEB\_TERMINAL | Any opened web terminal sessions are counted as user activity. |
| WEB\_IDE | Any connected Web IDE sessions are counted as user activity. |
| RDP | Any connected Remote Desktop sessions, web based or client based, are counted as user activity. |
| JETBRAINS | Any JetBrains IDE sessions connected to the remote dev server launched by `cs jetbrains remote-dev-server run ...` are counted as user activity. Note: sessions connected directly to the JetBrains remote dev server processes are counted as `SSH`. |
| ENDPOINT | Any established connections to the sandbox endpoints are counted as user activity. |
There's a special name `ANY` that references all the built-in probes; it can be used in either `disable_builtin_probes` or `enable_builtin_probes`.

#### Local Port Forwarding

Example:

```yaml
workspaces:
- name: work
  port_forward_rules:
  - local: '6379'
    remote:
      target: redis
      port: redis
  - local: '/run/backend.sock'
    remote:
      target: backend
      port: api
- name: backend
  ports:
  - name: api
    port: 8080
    protocol: HTTP/TCP
dependencies:
- name: redis
  service_type: redis
```

The property `port_forward_rules` maps a port (or a unix domain socket) on `localhost` to an exposed port on a workspace or a dependency in the same sandbox. This feature is designed for 2 purposes:

* providing optional information about the dependencies of the current workspace;
* minimizing the changes to code that expects a local-only environment (with dependencies configured on localhost).

However, a more cloud-native approach is to use [service linking](https://docs.sandboxes.cloud/docs/port-forwarding#service-linking) and avoid accessing `localhost` with ports defined in `port_forward_rules`. When using [service linking](https://docs.sandboxes.cloud/docs/port-forwarding#service-linking), a dependency can be resolved using environment variables:

* `NAME_SERVICE_HOST`
* `NAME_SERVICE_PORT`

where `NAME` is the upper-cased name of the target service. These environment variables are available in every workspace. In the `backend` workspace example above, the `port_forward_rules` can be avoided if the source code accesses `redis` using `$REDIS_SERVICE_HOST:$REDIS_SERVICE_PORT`.

When using `port_forward_rules`, the `local` property can take one of the following forms:

* `PORT`: the port number, listening ONLY on `localhost`;
* `+PORT`: the port number, listening ONLY on the primary network interface (e.g. `eth0`);
* `*PORT`: the port number, listening on ALL network interfaces;
* `/PATH`: the absolute path of a unix domain socket.

The `remote` property specifies the destination. `target` is the name of the destination service (the name of a workspace, dependency or container), and `port` is the name of the port exposed by the destination service. If the target is a dependency, the port name can be found via `cs depsvc list` or from [https://sandboxes.cloud/dependencies](https://sandboxes.cloud/dependencies).

#### Workspace Environment Variables

A list of [environment variables](https://docs.sandboxes.cloud/docs/environment-variables) applied to the current workspace, on top of the built-in and sandbox-scoped environment variables.
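For example, a minimal sketch of workspace-level `env` entries; the variable names are illustrative and mirror the `NAME=VALUE` form used by `env` lists elsewhere in this document:

```yaml
workspaces:
- name: dev
  env:
  - SERVER_NAME=dev
  - BACKEND_URL=http://backend:8080
```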
#### Workspace System

System-level configurations, like daemons and/or files.

```yaml
workspaces:
- name: example
  system:
    daemons:
    - name: foo
      run:
        cmd: /opt/foo/foo
        dir: /opt/foo
        env:
        - FOO=BAR
    files:
    - path: /etc/sandbox.d/setup
      mode: '0755'
      overwrite: true
      content: |
        #!/bin/bash
        echo "Setting up workspace"
    - path: ~/.sandbox/setup
      mode: '0755'
      overwrite: true
      content: |
        #!/bin/bash
        echo "Setting up for user"
    - path: /work/placeholder
      owner: '1000:1000'
      template: |
        SANDBOX={{env "$SANDBOX_NAME"}}
    - path: ~/.env
      symlink: /run/sandbox/fs/secrets/shared/dotenv
    - path: ~/.foo/credentials
      mode: '0600'
      secret:
        name: foo-creds
```

This defines a background process `foo` to be launched when the workspace starts. It runs before any checkout/build hooks to provide support as part of the workspace system. Each process defined here is launched and monitored, and restarted if it stops. The definition is equivalent to individual YAML files in the `/etc/sandbox.d/daemons` folder. For example, the above daemon can be defined in a file `/etc/sandbox.d/daemons/foo.yaml` (which can be baked into a base snapshot):

```yaml
name: foo
run:
  cmd: /opt/foo/foo
  dir: /opt/foo
  env:
  - FOO=BAR
```

The `files` section defines files injected into the workspace file system. The `path` specifies the location in the workspace; it must be an absolute path, or start with `~/` to indicate a path inside the home directory. When the path starts with `~/`, the default ownership is `owner:owner` (i.e. `1000:1000`) rather than `root`. The ownership can always be specified explicitly using `owner` (the value must be `UID:GID`).

The content of the file is given by exactly one of the following:

* `content`: a plain text file;
* `template`: the content is rendered using the specified Go template. Special functions are available:
  * `{{ env "STRING" }}` performs env expansion on `STRING`. For example, the `STRING` can be something like `The sandbox $SANDBOX_NAME is owned by $SANDBOX_OWNER_EMAIL`, which contains multiple env expansions. Keep in mind, the `STRING` is not an env name; use `$` to expand an env inside the `STRING`;
  * `{{ secret "NAME" }}` extracts the content of the specified shared secret `NAME`;
* `symlink`: `path` specifies a symbolic link, and the target is the value given here;
* `secret`: the content comes from the specified shared secret.

The `mode` specifies the permission of the file/directory. Please quote the value as a string, otherwise YAML will interpret it incorrectly (e.g. as an octal number). If `mode` is unspecified, the system uses `0755` for directories and `0644` for files.

The `overwrite` flag specifies that the content of the file must exactly match what's specified. If the file/symlink exists with different content, it will be overwritten. If this flag is `false` (or unspecified), an existing file/symlink will not be touched.

#### Wait For

A list of workload names (including resource names); the workspace doesn't start any daemons before those workloads are ready (based on their readiness probes). This introduces dependencies between the workloads and resources; cyclic dependencies are not allowed.
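A minimal sketch of `wait_for`; the workload names are illustrative:

```yaml
workspaces:
- name: frontend
  wait_for: # frontend daemons don't start until these are ready
  - backend # a workspace name
  - redis # a dependency name
- name: backend
dependencies:
- name: redis
  service_type: redis
```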
#### Access Restriction

Specifies that the workspace should run in [Restricted Mode](https://docs.sandboxes.cloud/docs/cloud-resources-setup#restrict-access-to-workspaces-and-secrets), in which only an org admin is able to access it (over SSH, Web Terminal, Web IDE, Remote Desktop, etc.), and secrets with access restriction set to *Admin Only* will be mounted.

```yaml
workspaces:
- name: example
  ...
  restriction:
    life_time: STARTUP
```

The `life_time` can be one of the following values:

* `STARTUP`: the workspace is created in *Restricted* mode and can exit it later at any time, upon request by any user who has *Update* permission on the sandbox. Once the workspace exits *Restricted* mode, it can never get back into *Restricted* mode again, and secrets with access restriction set to *Admin Only* will be unmounted;
* `ALWAYS`: the workspace is created in *Restricted* mode and can't exit it.

The restriction setting is permanent once the workspace is created and won't be changed even if the Sandbox Definition is updated.

#### Lifecycle

Additional hooks to be executed during special sandbox lifecycle events:

* on\_create: the hook is executed during sandbox creation, after all setup tasks (e.g. checkout, build, etc.) are completed;
* on\_suspend: the hook is executed before the sandbox is suspended. Failure of the hook will prevent the sandbox from being suspended;
* on\_resume: the hook is executed after the sandbox is resumed;
* on\_delete: the hook is executed before the sandbox is deleted. Failure of the hook will prevent the sandbox from being deleted.

When a lifecycle hook fails, the sandbox/workload will not move to the next lifecycle state. In this case, the workspace is still accessible, so the owner is able to troubleshoot. After that, use either the UI or the [CLI](https://docs.sandboxes.cloud/docs/command-line-tool#lifecycle-related) to resolve the failure and let the lifecycle transition move on. All lifecycle failures except `on_suspend` still allow sandbox auto-suspension if there's no activity. After resume, the failure is restored and still requires resolution, unless `on_resume` fails, in which case the previous failure is replaced by the `on_resume` failure.

The lifecycle hooks are executed with respect to the implicit/explicit dependency relationships between the workloads and resources. For example, if workspace A has a `wait_for` including workspace B, then `A.on_create`/`A.on_resume` run after `B.on_create`/`B.on_resume`, and `A.on_suspend`/`A.on_delete` run before `B.on_suspend`/`B.on_delete`. For resources, when `use_workspace` is defined, there is an implicit dependency between the resource and the workspace: the resource's `on_create`/`on_resume` run after the workspace's `on_create`/`on_resume`, and vice versa for `on_suspend`/`on_delete`.
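As a rough sketch only, assuming each workspace hook accepts the same [Run Schema](https://docs.sandboxes.cloud/docs/repo-manifest#run-schema) used by the resource handlers shown later in this document (the script path is hypothetical):

```yaml
workspaces:
- name: dev
  lifecycle:
    on_create:
      run:
        dir: src
        cmd: ./scripts/seed-test-data.sh # hypothetical script
```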
### Dependencies

The section `dependencies` lists the well-known services to be added to a sandbox and consumed by the other workloads. A *dependency* is a service managed by the sandbox system and deployed in sandboxes for development purposes (single instance, non-clustered, no HA, no backup). To get a list of currently supported dependencies, visit the [web console](https://sandboxes.cloud/dependencies) or run CLI `cs dependency-service list`.

When defining a dependency, the properties `name` and `service_type` are mandatory. The property `name` specifies the name of the dependency, and it's also used as the *hostname* inside the sandbox network to reach the service. The environment variables for service injection are generated per workspace using this name. The property `service_type` specifies the actual type of the dependency (check the [web console](https://sandboxes.cloud/dependencies) or run CLI `cs dependency-service list`).

The property `version` specifies a version explicitly. Otherwise, a default version (defined by the dependency service) will be used. The property `snapshot` optionally specifies the name of a snapshot to restore during the creation of the dependency, if snapshot functionality is supported. The property `properties` optionally defines a key/value map providing parameters for creating the dependency. The key/value pairs are dependency-specific. Here is a list of properties defined by the currently supported dependencies (or find out using CLI `cs dependency-service show SERVICE_TYPE`):

| SERVICE TYPE | PROPERTY KEY | DESCRIPTION |
| :--- | :--- | :--- |
| mysql | root-password | The initial password of root. Default is empty (no password required for root). |
| mysql | username | The regular user to be created. |
| mysql | password | The password for the regular user. It's only used if `username` is specified. |
| mysql | database | The database to be created, with access granted to the regular user (if `username` is specified). |
| postgres | username | The regular user to be created. |
| postgres | password | The password for the regular user. It's only used if `username` is specified. |
| postgres | database | The database to be created, with access granted to the regular user (if `username` is specified). |
| mongodb | replicaset | The name of the replica set. If specified, the single-instance mongodb server will be configured as a single-instance replica set. |
| redis | persistence | If specified, turn on persistence. Use one of the values: `appendonly` (persist data using an append-only file) or `rdb` (persist data using RDB). |
| redis | save | The save configuration, in the format of `SECONDS CHANGES; SECONDS CHANGES ...`. Example: `900 1; 300 10; 60 10000`. |
| redis | cluster | If the value is `yes` (or `y`) or `true` (or `t`) (note: it must be a quoted string when defined in YAML), configure the single-instance redis to be a single-instance redis cluster (the single instance covers all partitions). |
| rabbitmq | default-user | The default username for authentication. If unspecified, the guest user can access. |
| rabbitmq | default-pass | The password for `default-user`, if specified. |
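For example, a minimal sketch combining these properties; the names, credentials, version, and snapshot are illustrative:

```yaml
dependencies:
- name: mysql
  service_type: mysql
  version: '8.0' # optional; a default version is used if omitted
  snapshot: mysql-dev-20211201 # optional; restored during creation
  properties:
    username: dev
    password: dev-password
    database: app
- name: redis
  service_type: redis
  properties:
    persistence: appendonly
```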
### Containers

The section `containers` defines workloads created directly from container images. It gives a developer the flexibility to bring in any service as long as there's a container image (private container registries are not supported yet). A container is defined with the following information (a combined sketch appears at the end of this section):

* `image`: the container image, following the docker image naming convention;
* `entrypoint`, `args`, `cwd`: corresponding to [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint), [CMD](https://docs.docker.com/engine/reference/builder/#cmd) and [WORKDIR](https://docs.docker.com/engine/reference/builder/#workdir) in the docker image configuration;
* [ports](#ports): the ports exposed by the container and their protocols;
* [env](#container-environment-variables): environment variables added to the container;
* [probes](#probes): the readiness probes;
* [volume\_mounts](#volume-mounts): additional volumes mounted into the container;
* [run\_as](#run-as): run as a specified user/group;
* [snapshot](#container-snapshot): restore a snapshot during creation.

Unlike containers running on other systems (docker, Kubernetes, etc.), all containers running in a sandbox have their filesystem persisted. Restarting a container workload will not reset the filesystem. So in most cases, it's not necessary to specify an item in `volume_mounts` unless the volume needs to be shared across multiple containers. If resetting the file system is needed, rebuilding the container erases everything: `cs sandbox rebuild -S SANDBOX WORKLOAD`.

#### Volume Mounts

Additional volumes can be mounted into the container, for example:

```yaml
containers:
- name: sqlpad
  ...
  volume_mounts:
  - name: sqlpad
    path: /var/lib/sqlpad
  - name: files
    path: /etc/config/example.conf
    sub_path: example.conf
```

The `name` specifies the name of a *volume* (defined under the [volumes](#volumes) section). The `path` specifies the path inside the container; it must be an absolute path without `.` or `..` in the middle. The final mountpoint can be a directory or a file, based on what's mapped in the original volume. When `sub_path` is used, it references a path under the original volume.

#### Run As

By default, the user/group specified in the container image (or root/root when unspecified) is respected. If `run_as` is specified, it overrides the settings in the container image. The values can be either username/groupname or uid/gid. Example:

```yaml
# All fields are optional
containers:
- name: example1
  run_as:
    user: user1
    group: group1
- name: example2
  run_as:
    user: user
- name: example3
  run_as:
    uid: 1000
- name: example4
  run_as:
    uid: 1000
    gid: 900
```

#### Container Environment Variables

This is similar to [Workspace Environment Variables](#workspace-environment-variables). The final environment variables are generated from the sources in the following order:

* environment variables defined in the container image;
* app/sandbox environment variables;
* environment variables defined in the container.

#### Container Snapshot

A snapshot can be taken from the volumes of a running container. This is only supported when the container defines at least one `volume_mounts` entry backed by a [regular persistent volume](#regular-volume). The `volume_mounts` backed by regular persistent volumes are also used as the schema of the snapshot, so it can only be restored to a container with the same set of `volume_mounts`. Similar to dependency snapshots, the container is temporarily stopped while a snapshot is created or restored.
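As referenced above, here's a hedged sketch of a container combining several of these properties; the image, names, and values are illustrative:

```yaml
containers:
- name: sqlpad # illustrative
  image: sqlpad/sqlpad:latest # pulled from docker hub; illustrative
  ports:
  - name: http
    port: 3000
    protocol: HTTP/TCP
  env:
  - SQLPAD_ADMIN=admin@example.com # illustrative
  volume_mounts:
  - name: sqlpad
    path: /var/lib/sqlpad
volumes:
- name: sqlpad # a regular persistent volume (see Volumes below)
```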
#### Container Wait For

Same as for workspaces, the container doesn't start until the workloads in its `wait_for` list become ready (based on their readiness probes).

### Volumes

The section `volumes` defines the volumes referenced by the entries under [volume\_mounts](#volume-mounts) of the [containers](#containers) section. A *volume* can be one of the following types:

* [Regular](#regular-volume): a persistent volume shared with one or more containers;
* [Content](#content-volume): a read-only volume with predefined content;
* [Secret](#secret-volume): a volume containing the content of a [Secret](https://docs.sandboxes.cloud/docs/secrets);
* [Workspace Filesystem](#workspace-filesystem-volume): a volume exposed as a shared file system backed by a workspace.

#### Regular Volume

A volume defined with only a *name* represents a regular, persistent volume:

```yaml
volumes:
- name: data
```

A regular volume can be shared by one or more containers. When multiple containers reference the same regular volume, these containers may be co-located on the same host under the hood at runtime, and the volume is shared for reading and writing.

#### Content Volume

A read-only volume with predefined content. For example:

```yaml
volumes:
- name: json_config
  content:
    text: |
      {
        "key": "value"
      }
- name: yaml_config
  content:
    text: |
      key: value
- name: config
  content:
    text: |
      some text information
      another line
- name: binary_config
  content:
    binary: !!binary "SGVsbG8gV29ybGQK"
```

A content volume is mounted as a file inside a container. Either the `text` or `binary` value is used as the content of the file as-is. When the content is updated in the App Definition, the file inside the container reflects the change immediately after the sandbox is in sync.

#### Secret Volume

A read-only volume with content from a *shared* secret.

```yaml
volumes:
- name: cred
  secret:
    name: shared-cred
```

A secret volume is mounted as a file inside a container.

#### Workspace Filesystem Volume

A pseudo volume exposing the root filesystem of a workspace as a shared filesystem.

```yaml
volumes:
- name: work
  workload:
    name: workspace
    prefix: /
```

The volume is mounted as a remote filesystem inside a container. By default, `prefix` is `/`, which exposes the full root filesystem. If `prefix` is defined, it is prepended to every filesystem access request to construct the final path inside the workspace filesystem. Not all paths in the workspace filesystem are exposed: mountpoints like `/proc`, `/sys`, `/dev/shm`, etc. are not exposed.

### Endpoints

The section `endpoints` specifies how the application in a sandbox can be accessed from the Internet, so it can be tested end-to-end, advertised as a demo, etc. Most ports exposed by workspaces and dependencies are private inside the sandbox network. An `endpoint` is used to route traffic from the Internet to one of these ports. Only TCP and HTTP endpoints are supported.

#### HTTP Endpoint

An HTTP endpoint is capable of routing HTTP requests based on matchers and optionally supports authentication using Single-Sign-On from the sandbox system. For example:

```yaml
endpoints:
- name: app
  http:
    routes:
    - path_prefix: /
      backend:
        target: frontend # workspace name
        port: http # port name defined in the workspace
```

The property `name` defines the name of the endpoint, which will also be part of the generated DNS name on the Internet. The property `path_prefix` defines a string prefix to literally match the request path.
When matched, the request is routed to the destination specified by `backend`. If multiple rules are specified, the longest match wins.

As most sandboxes are for development purposes, it's insecure to expose an endpoint to the Internet without access control. By default, all HTTP endpoints are protected by Single-Sign-On from the sandbox system. Without further configuration, only members in the same organization are allowed to access the protected endpoints.

In some cases, e.g. when the application already implements authentication (API-only endpoints) or for demo purposes, the Single-Sign-On protection can be explicitly disabled:

```yaml
endpoints:
- name: api
  http:
    auth_proxy:
      disabled: true # Explicitly disable SSO protection.
    routes:
    - path_prefix: /
      backend:
        target: backend # workspace name
        port: api # port name defined in the workspace
```

For some demo cases, customers outside the organization are invited to try things out. An endpoint can be configured with an extra policy so that the Single-Sign-On protection allows or rejects access based on individual identity:

```yaml
endpoints:
- name: app
  http:
    auth_proxy:
      rules: # Custom access control policy.
      - regexp: "i.+-c@sample.com"
        action: REJECT
      - pattern: "ext@sample.com"
        action: REJECT
      - pattern: "*@sample.com"
        action: ACCEPT
    routes:
    - path_prefix: /
      backend:
        target: frontend # workspace name
        port: http # port name defined in the workspace
```

The above example allows all organization members and selected customer emails to access the endpoint. For certain demo-only use cases, organization members can also be excluded by setting `auth_proxy.disable_defaults` to `true`, for example:

```yaml
endpoints:
- name: app
  http:
    auth_proxy:
      rules: # Custom access control policy.
      - regexp: "i.+-c@sample.com"
        action: REJECT
      - pattern: "ext@sample.com"
        action: REJECT
      - pattern: "*@sample.com"
        action: ACCEPT
      disable_defaults: true # This will not allow organization members
    routes:
    - path_prefix: /
      backend:
        target: frontend # workspace name
        port: http # port name defined in the workspace
```

##### Default Path

This is informational only. When specified, the URL opened by clicking the endpoint from the WebConsole will have this path instead of `/`.

```yaml
endpoints:
- name: dashboard
  http:
    routes:
    - path_prefix: /
      backend:
        target: dashboard
        port: http
  path: /dashboard
```

With the above example, the endpoint URL on the WebConsole contains the path `/dashboard` rather than `/`.

##### Header Injection

Custom headers can be injected, for example:

```yaml
endpoints:
- name: app
  http:
    routes:
    - path_prefix: /
      backend:
        target: dev
        port: http
      request_headers:
        X-App-Env: 'sandbox-{{.SandboxID}}'
      response_headers:
        X-App-Server: 'sandbox-{{.SandboxID}}'
```

The value can be a Go template with the following variables substituted:

| Context | Value |
| :--- | :--- |
| `.Org` | Org name |
| `.SandboxID` | Current sandbox ID |
| `.SandboxName` | Current sandbox name |
| `.EndpointDNSSuffix` | The suffix (without the endpoint name) to construct the full DNS name of an endpoint. For example: `--sandbox-org.sandboxes.run` |
| `.EndpointBaseDomain` | The base domain of endpoint DNS, e.g. `sandboxes.run` |
| `.EndpointName` | The name of the current endpoint |
| `.EndpointDNS` | The DNS name of the current endpoint |
| `.SysBaseDomain` | The base domain of the Crafting system: `sandboxes.cloud` |
| `.SysDNSSuffix` | The suffix to construct DNS names with the system base domain, e.g. `.sandboxes.cloud` |
| `.AppDNSSuffix` | The suffix to construct endpoint DNS names, e.g. `.sandboxes.run` |
##### Passthrough Mode

This is a special mode of an HTTP endpoint which provides header-based (e.g. API-key) authentication when regular OAuth is not applicable (e.g. service-to-service communication). It can be configured as:

```yaml
endpoints:
- name: api
  http:
    routes:
    - path_prefix: /
      backend:
        target: api-server
        port: http
    auth_proxy:
      mode: PASSTHROUGH
      passthrough:
        required_headers:
        - header: X-Api-Key
          regexp: '^apikey-123456$'
```

#### TCP Endpoint

A TCP endpoint forwards TCP connections to the specified backend:

```yaml
endpoints:
- name: conn
  tcp:
    backend:
      target: backend
      port: tcp
```

TLS is always required for TCP connections, and the endpoint terminates the TLS (without requiring/verifying client certificates), so the backend talks only plaintext.

#### Internal Endpoint

An endpoint can be defined as internal; it's assigned an internal DNS name and can be accessed from other sandboxes:

```yaml
endpoints:
- name: foo
  type: INTERNAL
  ...
```

Internal endpoints are assigned a DNS name like `NAME--SANDBOX-ORG.sandboxes.internal`. So the above endpoint in a sandbox (assume the name `sandbox1`) in an org (assume the name `org1`) can be accessed from other sandboxes via `foo--sandbox1-org1.sandboxes.internal`. Note: internal endpoints are exposed with TLS. It's impossible to expose an endpoint without TLS.

### Resources

A `resource` in a sandbox represents a collection of resources outside of the sandbox system but managed by the lifecycle of the sandbox. It uses scripts provided by the user to create, suspend, resume and delete the actual resources, which are opaque to the sandbox system. A `resource` has 4 lifecycle event hooks, all optional:

* `on_create`: the hook is executed during sandbox creation;
* `on_delete`: the hook is executed before a sandbox is deleted;
* `on_suspend`: the hook is executed before a sandbox is suspended;
* `on_resume`: the hook is executed right after the sandbox is resumed.

Each hook uses a workspace to execute its script. As a common practice, the workspace checks out the repository containing the scripts and provides the environment for execution. Resources are defined as a list under `resources`, for example:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  details: |
    Created [resource]({{state.resource_link}})
  handlers:
    on_create:
      save_state: true
      max_retries: 3
      timeout: 600s
      use_workspace:
        name: dev
        require_build: true
      run:
        dir: src
        cmd: ./scripts/provision.sh
      artifacts:
      - terraform
    on_suspend:
      use_workspace:
        name: dev
      run:
        dir: src
        cmd: ./scripts/suspend.sh
    on_resume:
      save_state: true
      use_workspace:
        name: dev
      run:
        dir: src
        cmd: ./scripts/resume.sh
      artifacts:
      - terraform:tf/terraform.tfstate
    on_delete:
      use_workspace:
        name: dev
      run:
        dir: src
        cmd: ./scripts/unprovision.sh
```

Each resource is defined with a `name`, an optional `brief` (a one-sentence summary), optional `details` (a markdown template), and a list of `handlers`.
Each handler specifies the workspace (via `use_workspace`) and how to run the script in the workspace (using the [Run Schema](https://docs.sandboxes.cloud/docs/repo-manifest#run-schema)). The `dir` property must specify a path relative to the home directory (not the checkout path).

The `save_state` flag (mostly used in `on_create` and `on_resume`) indicates that the output (STDOUT only) of the script is JSON and should be persisted as the state of the resource. The values in the JSON can be used to render the markdown defined in `details`. In the above example, an output of

```json
{"resource_link": "https://svc.awsamazon.com/something/id"}
```

will be used in `details` to render the final markdown:

```markdown
Created [resource](https://svc.awsamazon.com/something/id)
```

With `save_state: true`, the output can also be accessed from the workspace file system, located at `/run/sandbox/fs/resources/NAME/state`. In the above example, `NAME` is `aws`. Note: the most recently executed script overwrites the state if its corresponding `save_state` is `true`. In the above example, the output of the `on_resume` script overwrites that generated by `on_create`.

The property `max_retries` specifies the maximum number of retry attempts if the script fails. Specifically for `on_suspend` and `on_delete`, failure of the script prevents the sandbox from being suspended or deleted, so the user is able to get into a workspace and debug. A sandbox can be manually, forcibly deleted by ignoring the `on_delete` hook of resources. The property `timeout` specifies the total time allowed for the script, including all retries.

While using a workspace, the script is executed after checkout completes (including the post-checkout hook defined in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest)). With `require_build: true`, the script is executed after a successful build instead of checkout completion.

The list of `artifacts` provides hints about additional information generated by the script. For example, the value `terraform` indicates the script involved Terraform and generated `terraform.tfstate` in the same directory, so the sandbox system will attempt to visualize the Terraform states in the WebConsole. Alternatively, `terraform:dir/file` specifies an alternative path, relative to the working directory, for the Terraform state file.

#### Terraform Support

If Terraform is used as the only tool for provisioning the resources, a simpler format can be used:

```yaml
env:
- AWS_REGION=us-west1
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  details: |
    EC2 instance id: {{state.instance_id}}
  terraform:
    save_state: true
    workspace: dev # Same as above.
    require_build: false
    # The directory (relative to home dir) containing the main terraform module.
    dir: deploy/tf
    run:
      max_retries: 3
      timeout: 600s
      # command is optional, only used if the executable is not "terraform".
      command: tf
      # additional command line arguments.
      args:
      - 'arg passed to terraform directly, can use ${ENV_VAR}'
      # additional env variables.
      env:
      - CURRENT_REGION='${AWS_REGION}'
    vars:
      instance_type: 't2.micro'
      region: '${AWS_REGION}'
    # If specified, the value of the output will be used as the output of this
    # lifecycle hook. Otherwise, the full terraform output in JSON is used:
    # terraform output -json
    output: instance
```

The system knows how to run `terraform`.
With the above example, the system runs `terraform apply` for the `on_create` hook and `terraform destroy` for the `on_delete` hook, and uses the full terraform output in JSON (unless `output` is specified) as the state. The hooks for `on_suspend` and `on_resume` are undefined by default. To enable them, add `on_suspend` explicitly; this also enables `on_resume`, which uses exactly the same config as `on_create`:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  terraform:
    workspace: dev
    dir: deploy/tf
    run:
      timeout: 600s
    vars:
      instance_type: 't2.micro'
    on_suspend:
      vars:
        instance_count: '0'
```

The above example runs `terraform apply` with the special variable `instance_count=0` during sandbox suspension, and on resume runs `terraform apply` the same as `on_create`.

By default, `on_delete` uses the same configuration as `on_create`, but runs `terraform destroy`. If special configuration is needed for `terraform destroy`, define `on_delete` explicitly:

```yaml
workspaces:
- name: dev
  checkouts:
  - path: src
    ...
resources:
- name: aws
  brief: Dev Resources on AWS
  terraform:
    workspace: dev
    dir: deploy/tf
    run:
      timeout: 600s
    vars:
      instance_type: 't2.micro'
    on_delete:
      vars:
        delete_var: special
```

More examples can be found in this [repository](https://github.com/crafting-demo/solutions).

### Customizations

The `customizations` section in the definition provides additional information for special features and extensibility.

#### UI Widget for Env

Sandbox-level environment variables can be customized to show additional UI widgets on the sandbox creation page in the WebConsole, so a user can simply select from a set of predefined values for a specific environment variable instead of carefully typing it in, which is error-prone.

```yaml
env:
- INSTANCE_TYPE=t2.micro
- APP_NAME=
...
customizations:
- env:
    name: INSTANCE_TYPE
    display_name: EC2 Instance Type
    description: The instance type for the additional EC2 VM
    choice:
      default: t2.micro
      options:
      - t2.micro
      - t3.medium
      - t3.large
- env:
    name: APP_NAME
    display_name: The name of the app
    validators:
    - regexp: '^[a-z][a-z0-9-]*[a-z0-9]$'
```

The property `name` must match an environment variable defined in `env`. The property `display_name` must be specified in order to show the widget in the WebConsole.

##### Edit Box

Without additional config, an edit box is shown on the sandbox creation page for the environment variable. Additional `validators` can be specified for the value entered by the user.

##### Selection

With `choice`, a dropdown selector is shown on the sandbox creation page. The default value is the first item in the `options` list unless explicitly specified.

##### Editable Selection

With `choice` and `editable: true`, for example:

```yaml
env:
- APP_TYPE=simple
...
customizations:
- env:
    name: APP_TYPE
    display_name: App type
    description: The type of the app, or enter your own
    choice:
      editable: true
      options:
      - simple
      - multiple
      - extra
    validators:
    - regexp: '^[a-z][a-z0-9-]*[a-z0-9]$'
```

It shows an editable dropdown.

#### Sandbox Flavor

A sandbox flavor is a named preset which defines the information used to create a new sandbox. With sandbox flavors defined in the `customizations` section, the user can simply select one during sandbox creation instead of specifying every detail.
Here's an example:

```yaml
customizations:
- flavor:
    name: Standard # The flavor name
    default: true # If true, this flavor is selected by default on sandbox creation
    env:
    - FOO=BAR # Environment variables appended to the sandbox-scope env list
    - FOO1=${FOO}1 # Expansion is supported
    workspaces: # Configure specific workspaces
      dev: # This must be the name of the workspace
        auto: true # Put the workspace in AUTO mode
        env:
        - KEY=VALUE # Append to the workspace-scope env list
        checkouts: # Override specified checkouts in the workspace
        - path: src # This is used to match the defined checkout in the Template
          version_spec: develop # Override the version_spec to use develop branch
    excludes: # Exclude the specified workloads during sandbox creation
    - testloader
    - test-db
```

A flavor is able to define configurations (all optional) covering:

* sandbox-scoped environment variables
* workspace-scoped environment variables
* putting a workspace in AUTO mode
* checkout version specs
* excluding workloads from the sandbox

##### Environment Variables

The top-level `env` specifies the sandbox-scoped environment variables. They are appended to the `env` list defined in the Template. The `env` under `workspaces` appends environment variables to the list of the workspace. Regarding how the final environment variables are generated, please read [environment variables](https://docs.sandboxes.cloud/docs/environment-variables).

##### Workspace Mode

Adding `auto: true` to a workspace puts the workspace into *AUTO* mode during sandbox creation. Regarding the mode, please read [auto follow](https://docs.sandboxes.cloud/docs/auto-follow) for more information.

##### Checkout Version Spec

First define `path: PATH` to match the checkout defined in the workspace from the Template, and then use `version_spec` to specify a new [version spec](#checkouts).

##### Exclude Workloads

When a Template defines many workloads, sometimes a sandbox is created for specific tasks which only need a subset of the workloads. In this case, a flavor is the most convenient way to define a sub-graph of workloads to be activated in the sandbox. List the names of the workloads to exclude here. A name can be that of any workspace, dependency, container or resource. Note: sandbox creation may fail if the exclusion list breaks the dependency graph (introduced by the `wait_for` property of workloads, or `use_workspace` in [resources](#resources)).

---

# Source: https://docs.sandboxes.cloud/docs/secrets.md

## Secrets for storing dev credentials

`Secrets` in the Crafting system are used to store sensitive information which is encrypted at rest and has limited access within sandboxes in the cloud, so the services running in a sandbox get the expected configuration without saving sensitive information in inappropriate places, like source code.

A cloud-native service may need credentials (like tokens or API keys) to talk to external services. A developer may need certain login information for accessing VPN, cloud storage, etc., in their organization. It's best practice to make sure this information is encrypted at rest and accessible only to authorized users.
## Create secrets

A secret can be created by a user in one of two scopes:

* Shared in org: the secret is accessible from all the sandboxes in the current organization
* Private: the secret is only accessible by the sandboxes owned by the user in the current organization

To create a secret, one way is to go to the `Resources -> Secrets` page on the Crafting Web Console and click `New Secret` as shown below. In the dialog, we can input the name of the secret and its content. Note that after creating the secret, for security reasons, you will not be able to view its content from the web console. To access the secret, please see [below](#access-secrets).

Secrets can also be created via the CLI, `cs`, using the following commands; please see [CLI Command](https://docs.sandboxes.cloud/docs/command-line-tool#secret) for the full reference.

```shell
cs secret create NAME # this creates a secret private in org
cs secret create --shared NAME # this creates a secret shared in org
```

Secrets are allowed to have the same name if they belong to different scopes.

*Note*: secrets are designed for sensitive information. They should be small in size (KB level) and accessed infrequently.

## Access secrets

The content of a secret can only be accessed inside a workspace:

* Shared in org: `/var/run/sandbox/fs/secrets/shared/NAME`
* Private: `/var/run/sandbox/fs/secrets/owner/NAME`

To protect private secrets, they are only mounted to the file system if the sandbox is in private mode. When the sandbox is changed to shared mode, they are unmounted, and only remounted after it changes back to private mode. For sandbox access control, please see [Access control in sandbox](https://docs.sandboxes.cloud/docs/access-control) for more information.

Note: the folder `/var/run/sandbox/fs/secrets/owner` also contains shared secrets, as it represents all secrets accessible by the sandbox owner. However, only the private secret is revealed if the same name is assigned to both a private secret and a shared secret.

The content of a *shared* secret can be placed in the value of an environment variable, for example:

```yaml
env:
- SOME_API_KEY=prefix${secret:NAME}
```

where `NAME` is the secret name. Note: only *shared* secrets can be referenced.

---

# Source: https://docs.sandboxes.cloud/docs/security.md

## Security

### Data Encryption

All communication between the Crafting Sandbox System and the Internet is encrypted via HTTPS and TLS 1.2. The internal communication between the user's workloads and the system is encrypted via mTLS (with certificates verified mutually by both parties). The HTTP services exposed to the Internet from the user's workloads always use HTTPS and TLS 1.2 and are by default protected by a Single-Sign-On system.

All user data is encrypted at rest (via the mechanism provided by the cloud provider - Google Cloud). Highly sensitive information, including user-provided secrets and keypairs, is encrypted and secured in Vault ([https://vaultproject.io](https://vaultproject.io)), which is sealed using the Key Management System from the cloud provider.

### Vulnerability Scanning

Continuous vulnerability scanning is applied to all the components used internally by the system and followed up by our internal remediation process.

### Availability

User information is continuously backed up and can be restored point-in-time. The data generated in users' workloads is backed up using the cloud provider's default configuration; this doesn't cover cases where users explicitly delete data.
### Third-party Auditing

We employ third-party experts to perform penetration tests annually.

### Internal Control

All full-time employees are required to complete security training. All people who have access to the production system undergo background checks.

---

# Source: https://docs.sandboxes.cloud/docs/start-a-workspace.md

## Start a Workspace

In this section, we talk about how to start a simple workspace to do some online coding. If you are in a development team, it's likely that your team's admin has already set up the standard development environments as templates. In that case, see [Launch a sandbox](https://docs.sandboxes.cloud/docs/launch-a-sandbox) for more information on how you can further customize and launch a sandbox.

To start a [workspace](https://docs.sandboxes.cloud/docs/concepts-and-architecture#workspace), which is a dev container on the cloud where you can code and run your program, you can directly click `Create new Sandbox` from the `Home` page on your web console.

Then, under `Create a Workspace`, you can choose the Git repository URL from which to check out code, and the branch. Optionally, you can also choose a special container image to use for your workspace. By default, it is based on a standard Ubuntu Linux image. Clicking `Create` will create a new sandbox with a single workspace.

When the sandbox is ready, the source code is checked out into the sandbox, and you can click `Open WebIDE` to get into the Web IDE session to edit code. As shown below, the Web IDE is based on VS Code and has a terminal for executing commands.

![Web IDE with VS Code](https://files.readme.io/021353b-guide-workspace-webide.JPG)

You can also quickly modify the sandbox's configuration by clicking `Edit`. In the editing view, you can add more components such as workspaces, containers, dependencies, etc. to your sandbox. For details on editing the config, please see [here](https://docs.sandboxes.cloud/docs/templates-setup).

After editing, you can save the current sandbox configuration as a `template` so that sandboxes created in the future can use it, or click `Apply` to save the config to the current sandbox. For more information, please see [Standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox).

---

# Source: https://docs.sandboxes.cloud/docs/suspend-and-resume.md

## Suspend and resume

In this page, we talk about an important feature Crafting Sandbox offers: sandbox suspension. Leveraging powerful machines for development or running multiple services end-to-end on the cloud usually requires a lot of computational resources, such as CPU and memory. To save these resources during idle time, the Crafting platform supports *activity-based auto-suspension*. As a user, you may notice your sandbox is suspended after some time of inactivity.

## What is persisted in the sandbox during suspension?

During suspension, the dev containers that run the workloads in a suspended sandbox are not running on machines, to save resources. But unlike production stateless services in containers or some other ephemeral solutions, **all the file system state in the sandbox is saved** on persistent volumes for Crafting Sandbox. It means that you can pick up where you were, with all of the following exactly like they were before sandbox suspension.
* Source code checked out in your workspaces
* Local file system in your workspaces, including your home directory and root file system
* Logs from the services running in all your workspaces
* Files for containers and dependencies, e.g., data in your database services
* All sandbox configurations

Note that all in-memory state, however, is not saved during suspension, and all the service processes are restarted after the sandbox is resumed.

## How to suspend / resume a sandbox?

As mentioned before, a sandbox is auto-suspended when there is no activity on it. Basically every way a developer can interact with a sandbox is considered activity, including:

* Web IDE session
* SSH session
* File sync session
* Local desktop IDE connection to the sandbox session
* Endpoint access to the sandbox, from a mobile app, web frontend, external API, etc.

The auto-suspension threshold is 30 minutes of idle time if you are using Crafting SaaS. If you are using Crafting Self-hosted, it can be set by your admin.

You can manually suspend a sandbox from the sandbox page. To resume a suspended sandbox, you can resume it from the sandbox page as well. But in practice, most activities on the sandbox will either automatically resume it or prompt you to resume it in the UI.

## Pin a sandbox

Sometimes a sandbox is needed for demo purposes and is meant to have long periods of inactivity. For that usage, you can choose to `pin` a sandbox to prevent it from being suspended. Given that a pinned sandbox always consumes computational resources, there is an organization-level setting to limit how many sandboxes can be pinned.

You can also pin a sandbox via a CLI command

```shell
cs sandbox [pin|unpin] [SANDBOX-NAME]
```

The `SANDBOX-NAME` can be omitted if the command is run inside the sandbox to be pinned.

---

# Source: https://docs.sandboxes.cloud/docs/template-builder.md

## Template builder wizard

This page talks about how to use the template builder provided by Crafting to quickly set up the groundwork for a template. To start a new template, click the `New Template` button on the `Templates` page.

This page lists a few ways to start building a template:

* **Using samples**: Best fit for starting with some simple demo codebases with one frontend and one backend + database. It supports several common technologies to let you create a working template very quickly.
* **Template builder wizard**: A quick wizard that walks you through some basic steps. We will mainly talk about this in the rest of this section.
* **From scratch**: This option lets you directly get into the template builder with an empty template. You can directly change the YAML file based on the reference in [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition).

## Select source code repositories

The first step in the template builder wizard is to select the source repos that you want to work on. With GitHub app integration, you can select the repos here directly. Without it, you can input the repo manually here and optionally select a default branch, as shown above. Crafting can test whether [Git Access](https://docs.sandboxes.cloud/docs/git-access) is set up properly for the repos listed here. We recommend testing the repo access before continuing to the next step.

## Select dependencies

After selecting source code repos, you can select which dependency services are needed in your development environment. Crafting supports a number of commonly used services as built-in dependencies.
For a complete built-in dependency list, please see the `Resources -> Dependencies` page. If you are not able to find the dependency service you need, you can always add it as a custom container later in the process. Please see [Setup containers and dependencies](https://docs.sandboxes.cloud/docs/containers-dependencies-setup) for more details.

## Select tool packages

The next step in the wizard is to select tool packages (such as nodejs, jdk, etc.) to install on the workspaces. Installing them will override the system default. For your convenience, Crafting also has built-in support for a range of tool packages, as listed on the `Resources -> Packages` page.

If your code depends on other tool packages to set up and build, you have `sudo` permission on your workspace, so you can always install them later via `sudo apt install` as needed and persist the setup. See [Setup workspaces](https://docs.sandboxes.cloud/docs/workspaces-setup) for more information.

Here you are asked to specify a name for the `Template Builder Sandbox` that is going to be created next to assist you in building the template.

## Template builder sandbox

![Template Builder Sandbox](https://files.readme.io/3471945-image.png)

After clicking next from the last step of the wizard, the Crafting system creates a `Template Builder Sandbox` (also known as a `Standalone Sandbox`) for you, which includes all the settings you have input in the wizard; i.e., it will check out the code, set up the dependencies, and install the tool packages. Please continue at [Standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox) and [Setup workspaces](https://docs.sandboxes.cloud/docs/workspaces-setup) for instructions on the next steps.

---

# Source: https://docs.sandboxes.cloud/docs/templates-best-practices.md

## Checklist and best practices for templates

To summarize, here we provide a checklist of key points in setting up `Templates` to standardize dev environments for your team on Crafting. Depending on your specific needs, you don't necessarily need everything in this list, but it's good to check each item to find opportunities for optimizing the dev experience.
* \[ ] Set up workspaces with automated source code checkout (instructions [here](https://docs.sandboxes.cloud/docs/workspaces-setup#add-code-checkouts))
* \[ ] Set up libraries and packages needed by the source code, and persist them in snapshots (instructions [here](https://docs.sandboxes.cloud/docs/workspaces-setup#install-required-system-packages) and [here](https://docs.sandboxes.cloud/docs/workspaces-setup#persist-packages-and-libraries-setup-with-snapshots))
* \[ ] Set up automated build and service launch in workspaces in the [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest) (instructions [here](https://docs.sandboxes.cloud/docs/workspaces-setup#build-and-launch-service-and-setup-automation-in-repo-manifest))
* \[ ] If needed, set up environment variables and initialization scripts to further customize the workspace (instructions [here](https://docs.sandboxes.cloud/docs/#setup-environment-variables) and [here](https://docs.sandboxes.cloud/docs/workspaces-setup#initialization-scripts-for-workspace-setup))
* \[ ] Set up dependency services and custom containers to run together with your workspaces, and automate the loading of test datasets (instructions [here](https://docs.sandboxes.cloud/docs/containers-dependencies-setup))
* \[ ] Set up endpoints for external access to the services running in the sandbox (instructions [here](https://docs.sandboxes.cloud/docs/network-setup#setup-endpoints))
* \[ ] If needed, set up resources to represent cloud resources or Kubernetes namespaces (instructions [here](https://docs.sandboxes.cloud/docs/resources-setup), [here](https://docs.sandboxes.cloud/docs/cloud-resources-setup), and [here](https://docs.sandboxes.cloud/docs/kubernetes-setup))
* \[ ] If needed, set up `secrets` to manage development credentials (see [here](https://docs.sandboxes.cloud/docs/secrets))
* \[ ] If needed, set up instructions in markdown for your team on how to use the sandbox (see [here](https://docs.sandboxes.cloud/docs/home-screen-message-and-sandbox-instruction#sandbox-instructions))

In the remainder of this page, we will discuss some best practices for using templates:

* [Best practice for managing the templates](#best-practice-for-managing-the-templates)
* [Best practice for snapshots](#best-practice-for-snapshots)
  * [Create snapshots using code](#create-snapshots-using-code)
  * [Snapshots naming convention](#snapshots-naming-convention)
  * [Save VS Code settings in snapshots](#save-vs-code-settings-in-snapshots)

## Best practice for managing the templates

For any non-trivial templates, we strongly recommend storing them in source repositories and managing them "config-as-code" in the YAML format. Crafting allows editing the template directly in the YAML format, so pasting an existing YAML config into a template for testing its validity is straightforward. See [Sandbox Definition](https://docs.sandboxes.cloud/docs/sandbox-definition) for the details on how to define a template. Crafting does not require any particular way to store them, but the following practices are good candidates:

* If your code is a mono-repo or you have a "main" repo, you could store the template somewhere in there, or
* If you have a separate "dev ops" repo where you store a lot of shared configurations, you could store the template there as well, or
* You can create a separate repo just for storing the template.
Given that it is more convenient to edit and test a template directly on the Crafting web console, it's reasonable to iterate quickly without storing it at first when setting up something new. But once it gets into good shape, it's best practice to store the YAML definition somewhere and enforce a process for updating it.

In addition, templates depend heavily on [Repo Manifests](https://docs.sandboxes.cloud/docs/repo-manifest) to automate building and launching services from the source code. As the name suggests, a manifest corresponds to a single repo, so naturally we recommend storing each one in its corresponding repo. Even though you can define them directly as part of the template, we strongly recommend they are stored in `.sandbox/manifest.yaml` under the repo for long-term maintenance.

## Best practice for snapshots

In summary, there are four types of snapshots:

* **Base snapshot**: taken from a workspace root filesystem, with the home directory (`/home`) excluded; (see [Setup workspaces](https://docs.sandboxes.cloud/docs/workspaces-setup#persist-packages-and-libraries-setup-with-snapshots))
* **Home snapshot**: taken from the home directory of a workspace owner (`/home/owner`) using an explicit include/exclude list; (see [Setup workspaces](https://docs.sandboxes.cloud/docs/workspaces-setup#persist-packages-and-libraries-setup-with-snapshots))
* **Dependency/Container snapshot**: taken from the data directory of a dependency service, or a persistent volume mounted on a container;
* **Personal snapshot**: a snapshot containing personalized configurations that can be applied to the home directory (`/home/owner`) of every workspace in newly created sandboxes. (see [Personalize your sandbox](https://docs.sandboxes.cloud/docs/personalize))

### Create snapshots using code

To have a more reproducible and manageable process for tracking what's inside each snapshot, we recommend using a script to create them. In the script, you can run things like `sudo apt install` to install what's needed, and the script should be checked in as code for source control. When an update to a snapshot is needed, instead of just adding the packages and re-taking the snapshot, you can:

1. Create a workspace without the snapshot;
2. Run the updated script (stored in a source repo with code review) to set up the files;
3. Re-create the snapshot.
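For instance, a base-snapshot script might look like the following (the script name and package list are illustrative, assuming a Debian-based workspace image):

```shell
#!/usr/bin/env bash
# setup-base-snapshot.sh -- checked into a source repo and code reviewed,
# so the content of the base snapshot is always reproducible from code.
set -euo pipefail

sudo apt update
sudo apt install -y build-essential postgresql-client redis-tools
# ...install any other libraries or tools your code depends on...
```

Running this script in a fresh workspace and then taking the snapshot keeps the snapshot's contents fully traceable to reviewed code.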
### Snapshots naming convention

All snapshots share the same namespace regardless of type. It's recommended to prefix the snapshot type in the name to avoid conflicts, for example:

* Base snapshots are named `base-NAME-REV`.
* Home snapshots are named `home-NAME-REV`.
* Dependency snapshots are named `SERVICE-TYPE-NAME-REV`.

The `NAME` is defined based on the purpose of the snapshot, and `REV` can be anything indicating a revision, for example a date in the format `YYYYMMDD` or a monotonic version number. Some examples:

* `base-frontend-20211201`
* `base-backend-r1`
* `home-frontend-20211201`
* `mysql-dev-20211201`
* `mysql-test-r2`

If there are multiple sub-teams or sub-projects, a prefix can be added, e.g.

* `project1-base-backend-20211201`
* `team-a-home-backend-2`

Snapshots created for personal use can be prefixed with the user name, e.g.

* `alan-home-backend-1`

### Save VS Code settings in snapshots

It's helpful to put VS Code settings and extensions in snapshots (home or personal). The system supports VS Code web (in the browser) as well as desktop VS Code connecting over SSH, and they use different folders for settings and extensions.

Here's the base folder for the different VS Code editions:

* VS Code used in Web IDE: `~/.vscode-remote`
* Microsoft VS Code Desktop: `~/.vscode-server`
* Open Source VS Code Desktop: `~/.vscode-server-oss`

The following subfolders (or files) contain useful configurations that can be put in a snapshot:

* `extensions`: all installed extensions; the whole folder can be included in a snapshot;
* `data/Machine/settings.json`: the per-machine settings. It can be included in a personal snapshot.

```text
~/.snapshot/includes.txt
.vscode-remote/extensions
.vscode-server/extensions
```

```text
~/.snapshot.personal/includes.txt
.vscode-remote/data/Machine/settings.json
.vscode-server/data/Machine/settings.json
```

---

# Source: https://docs.sandboxes.cloud/docs/templates-setup.md

## Setup Templates for Dev Environments

Standardized and replicable development environments are critical for maximizing a team's productivity and minimizing maintenance overhead. Crafting offers a powerful `Template System` for engineering teams to define and evolve their dev environments centrally and in a managed way.

### What are templates

On the Crafting platform, a `Template` is a pre-set definition of a sandbox setup that is shared within a team and can be used to create sandboxes. Since a `template` is often used to represent the entire app end-to-end, it's also called an `app`. Since a `sandbox` represents a full-fledged end-to-end development environment, potentially including multiple `workspaces`, `containers`, `dependencies`, `resources`, `endpoints`, etc., setting it up from scratch takes a decent amount of effort. A `template` saves the setup effort and makes creating a new sandbox according to a common standard as easy as one click.

### Why use templates

In short, the key reason to use templates is the *manageability of dev environments*. For an individual developer working on a small side project, templating may be overkill, but for a team of developers working on an end-to-end product, a shared, standardized dev environment defined by a template is essential for productivity. Specifically, using templates has the following benefits:

* **Robustness**: The whole team shares common configurations, so if anything in the dev environment is broken, it's quickly detected and fixed. If anything breaks in one sandbox, simply create a new one from the template to get a fresh dev environment that is ready to code.
* **Consistency**: Not only are developers' dev environments consistent with each other, the common dev environment can also be made more consistent with production.
* **Quick onboarding**: New members of the team can get started right away without going through a lengthy environment setup.
* **Easy to update**: For applying patches and upgrading libraries, there is no need for everyone to update their environment individually. One person makes the update in the template, and it's pushed to everyone using the same template.
* **Collaboration**: Using a common base setup reduces communication gaps and allows team members to share best practices and troubleshoot easily.

In summary, **the more standardized the dev environment is, the more productive the team is**. Note that Crafting allows each developer to apply their own layer of customization on top of shared templates in a repeatable way. For teams with different dev environment needs, we suggest using different templates so that each template is best fit for its purpose.
## Outline of this section

In the remainder of this section, we go through the steps of setting up a template end-to-end:

* [Template builder wizard](https://docs.sandboxes.cloud/docs/template-builder): how to use the template builder wizard provided by Crafting to quickly set up the groundwork for a template.
* [Standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox): basics of the `standalone sandbox` we use for building templates, also known as the `template builder sandbox`.
* [Setup workspaces](https://docs.sandboxes.cloud/docs/workspaces-setup): steps to set up each dev container (`workspace`) to make it ready to code.
* [Setup containers and dependencies](https://docs.sandboxes.cloud/docs/containers-dependencies-setup): how to set up built-in dependencies such as `Postgres`, `Redis`, `ElasticSearch`, etc., and custom containers to support your services running in `workspaces`.
* [Network configuration and endpoints](https://docs.sandboxes.cloud/docs/network-setup): how to let components in the sandbox communicate with each other, and how to access services running in the sandbox from outside via `endpoints`.
* [Integrate resources](https://docs.sandboxes.cloud/docs/resources-setup): how to integrate external resources, including per-dev namespaces in a Kubernetes cluster and cloud-native resources from a cloud provider, such as `RDS` and `Lambda`.
* [Checklist and best practices for templates](https://docs.sandboxes.cloud/docs/templates-best-practices): putting everything together, and best practices for managing templates.

---

# Source: https://docs.sandboxes.cloud/docs/use-case-collaboration.md

## Team Collaboration, Local or Remote

In this section, we describe how Crafting facilitates collaboration in development teams, whether they work in the same office or remotely across the globe.

Collaboration is always a key factor in the overall performance of a development team. A collaborative engineering team can deliver much more, at much higher quality, than a team where everyone works in silos. In recent years, the increasing trend toward distributed development teams has added barriers to effective collaboration. With Crafting, the whole team can leverage online dev environments to collaborate much better, no matter where they are.

In the rest of this section, we are going to talk about several examples related to team collaboration:

* [Pair programming](#pair-programming)
* [Frontend backend integration](#frontend-backend-integration)
* [Live troubleshooting in context](#live-troubleshooting-in-context)
* [QA testing and feedback to developers](#qa-testing-and-feedback-to-developers)
* [Demo to customers](#demo-to-customers)

## Pair programming

Multiple developers sometimes need to work on a single piece of code in order to leverage each other's knowledge, brainstorm solutions, or simply avoid mistakes at critical points. Traditionally this requires the developers to sit together looking at the same monitor, which cannot happen when they work from different geographical locations. With Crafting, the codebase lives in an online workspace that all developers on the same team can access, simply using their own IDEs or even a browser window with the Web IDE. This way, one developer can watch another's code changes live, and they can even take turns making modifications.
## Frontend backend integration

Building a feature end-to-end often involves both frontend and backend engineers working on their corresponding parts of the code. To make sure the code works well together, a frontend engineer often needs a backend version with the new APIs supporting the feature. That sometimes becomes a problem when the frontend engineer is not familiar with how to run the backend properly, so they usually need to wait for the backend feature to make its way to staging or even production before being able to try it, adding significant delay. Furthermore, if there is any issue with the backend, the frontend engineer gets blocked and again has to wait a long time for the fix to slowly make it to staging or production before resuming work.

With Crafting, the backend engineer can create a sandbox with their code change and pass it to the frontend engineer before it's even merged to the main branch. The frontend engineer can point API calls to the sandbox version and work against that. If there are any issues with the API implementation, the backend engineers can modify the code in the sandbox live and iterate together with the frontend engineers quickly.

## Live troubleshooting in context

When developers join a new project, they are not familiar with the codebase and are often hit by issues that require troubleshooting from a more experienced developer. In a distributed team, it is usually very hard to describe precisely what went wrong over emails and messages. At the same time, troubleshooting over a Zoom call is painful and time-consuming. As a result, it's not uncommon for some team members to get blocked frequently without timely troubleshooting and lose productivity.

With Crafting, an experienced engineer can hop onto another engineer's dev environment and diagnose exactly what went wrong in the code and settings. The experienced engineer can see all the code and can run it in the other engineer's environment to reproduce the problem. This live context makes remote troubleshooting much easier.

## QA testing and feedback to developers

Pre-production QA testing by a QA team is an important step to ensure correctness, but it can be challenging to run smoothly. Not only is it difficult for a remote QA team to create an on-demand environment that reflects the exact version of the product under test; when the QA team finds an issue, it's often difficult for the developers to reproduce it in their environment in order to fix it.

With Crafting, the QA team can test against a sandbox that has the exact version of the product. When they hit an error, the whole environment, including testing data, can be snapshotted and sent to developers right at the failure point for debugging. The developer can reproduce the issue right in the environment and apply a fix. Then the whole sandbox can be passed back to QA for verification of the fix. This greatly reduces the communication gaps between developers and the QA team.

## Demo to customers

To sell a product to customers, business people usually need to demo a version of the product specially made with a certain data set and specific features. It is not ideal to demo with the production version, because the features to demo may be experimental and not ready for production. To make matters worse, as new information becomes available, it is often desirable to make last-minute changes to the demo in order to make the best pitch.
With Crafting, the business development team can have dedicated environments tailored to each customer's use case, with customized data sets and settings. Engineers can easily hop onto these environments and make live changes that are reflected immediately in the demo. Collaboration between business functions and developers becomes much smoother.

---

# Source: https://docs.sandboxes.cloud/docs/use-case-compose.md

## Scale beyond Docker Compose

In this section, we talk about how to use Crafting for multi-service development that scales beyond docker compose.

In modern software development, it's common for professional developers to work with a number of services which collectively implement the functionality of the product. Even when a product relies on only one main service, it often needs to be supported by multiple dependencies such as a PostgreSQL database, a Redis cache, etc. In order to have standardized and repeatable dev environments on their local machines, developers often leverage containerization and use docker (with docker compose) to set up their multi-service dev environments. While that's certainly a step up from an ad hoc dev environment, this setup has its own issues:

* It still runs all the services using computing resources on a local machine, which limits scalability and causes slowness when the number of services needed is beyond a threshold.
* Docker adds a layer which introduces further CPU and memory overhead, and sometimes incompatibility with the local OS.
* Even containerized, a service running in docker is still subject to the local CPU architecture, e.g., the MacBook Pro M1 chip is Arm-based, which may be inconsistent with production.

In the rest of this section, we discuss the two common ways people use docker compose for their dev environments and point out how upgrading to Crafting solves the issues mentioned above.

* [Everything in container for end-to-end environment setup](#everything-in-container-for-end-to-end-environment-setup)
* [Only dependencies in container to support local service dev](#only-dependencies-in-container-to-support-local-service-dev)

## Everything in container for end-to-end environment setup

One common way developers set up their dev environments with docker compose is to put every service they need in a container and set up an end-to-end environment. The advantage of this setup is that everything is containerized and standardized, ensuring end-to-end reliability; the downside is that developers need to edit code and run services inside containers, which can be tricky to set up.

Crafting can offer a near drop-in replacement for this setup, given that everything is already containerized and standardized. Specifically:

* You can convert your docker-based configuration (docker files and the docker compose file) to a Crafting `template` in a semi-automated way. This way, you can create standardized dev environments (`sandboxes`) on the cloud that have everything end-to-end.
* As services use service name + port to connect to each other in docker compose, the same pattern is used in a Crafting sandbox on the overlay network.
* All the services run on the cloud with production-like containers and no longer consume local resources. You can access the Internet-facing endpoints to run the whole deployment end-to-end.
* For code editing, you can use any of the following approaches based on your preference:
  * SSH to the remote codebase and edit in the terminal: ideal for text-based editors like vim, emacs, nano, etc.
  * Direct IDE access to the remote codebase: Crafting supports VS Code, and JetBrains IDEs such as IntelliJ, RubyMine, PyCharm, GoLand, etc.
  * Sync between a local folder and the cloud workspace: Crafting supports automatic file sync between any local folder and a folder in a cloud workspace, making sure the changes you make to local files propagate to your cloud workspace as you edit.
  * Local mount of remote folders: Crafting supports sshfs to mount folders in your cloud workspace onto your local file system.

Using this approach, you can have a Crafting environment quickly set up based on your existing docker-based configurations and scale beyond the limits of a single machine.

## Only dependencies in container to support local service dev

Another common approach is to put dependency services in containers while leaving the target service being worked on, and its codebase, outside of containers. The advantage of this approach is that code editing and running of the target service is done locally and natively without any indirection, and the service can access its dependencies using port mappings between local ports and the containers.

For this type of setup, Crafting's hybrid development model fits perfectly as a drop-in replacement. Specifically:

* You can launch a Crafting sandbox on the cloud to host the services that your target service depends on, directly from your docker compose file, with the single command `cs docker-compose up`. The services running on the cloud can leverage a pool of VMs, which scales well as your app grows to more and more services.
* The local port mappings in docker compose can easily be handled as port forwarding on Crafting to the services running in the sandbox. Your target service running locally can access its dependencies the same way as when they were run by docker compose.
* You still code, run, test, and debug your target service locally with the favorite desktop IDE you are already using. The main difference is that instead of consuming your local resources, the dependencies consume resources on the cloud, which frees up your local CPU and memory for your IDE and local testing.

Using this hybrid approach, you can offload the heavy dependencies to cloud machines to support a great local dev experience.
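As a rough sketch of this flow (run from the directory containing your compose file):

```shell
# Launch the dependency services defined in docker-compose.yaml in a
# cloud sandbox instead of local Docker
cs docker-compose up
```

With the compose file's local port mappings handled as port forwarding, the service you run locally keeps reaching its dependencies on the same local ports as before.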
---

# Overcome Local Machine Slowness

In this section, we discuss how Crafting helps overcome local machine slowness in development.

As complexity increases in modern software development, codebases become larger and larger, and there are more and more services for developers to handle. The decades-old practice of relying solely on a local machine for software development no longer fits developers' needs. The top issues hurting developer productivity in today's dev environments include:

* With a large codebase, it takes a significant amount of time just to index the code with a powerful IDE and start coding locally.
* It takes a long time to build the code locally and/or run unit tests, which consumes a lot of CPU and memory.
* The services a developer needs to run locally consume too much memory, and the local IDE runs very slowly because of that.
* When using remote machines for development, insufficient management of the cloud VMs results in huge waste of computing resources and poor developer experience.

In the rest of this section, we talk about the following topics:

* [Benefits of powerful cloud machines](#benefits-of-powerful-cloud-machines)
* [Optimize resource utilization and minimize development cost](#optimize-resource-utilization-and-minimize-development-cost)
* [Hybrid development for combining local and cloud](#hybrid-development-for-combining-local-and-cloud)

## Benefits of powerful cloud machines

We all know that a local desktop or laptop has only limited computation power. Since the last century, when the PC became prevalent, developers have mainly used their local machines for development. We are so used to the idea of coding locally that we don't give it enough thought, even though the age of the cloud has arrived. Almost all the productivity software we use is on the cloud, and yet we still rely on local-machine-based dev environments, which are clearly insufficient and suboptimal in today's world. Even powerful laptops that cost several thousand dollars can only alleviate some of the pain and cannot scale much further. Needless to say, more power means larger power consumption, a heavier battery, and lots of heat, all leading to inconvenience. It goes completely against the current technology trends of mobility, connectivity, and convenience.

Nowadays, cloud machines can easily scale beyond 64 or 128 cores per machine, with more than 256 GB, or even 1 TB, of memory, which means a cloud machine can potentially offer far more computational resources than a local machine. Beyond a single cloud machine, Crafting offers automation so that one can leverage a pool of machines to perform heavy tasks. While copying files such as built images and artifacts between local and cloud is often limited by local bandwidth, copying them between cloud machines is often lightning fast, which brings further potential for speedup.

With Crafting, developers can get a ready-to-code environment online quickly. For a large number of services or multiple codebases, Crafting's distributed model scales beyond a single machine to as much as the cloud can offer. Please see [Scale beyond Docker Compose](https://docs.sandboxes.cloud/docs/use-case-compose) and [Kubernetes Development and Testing](https://docs.sandboxes.cloud/docs/use-case-kubernetes) for details. Furthermore, using a homogeneous cloud dev environment greatly improves stability and maintenance, as shown in [Maintainable Dev Environments](https://docs.sandboxes.cloud/docs/use-case-standardization).

## Optimize resource utilization and minimize development cost

Whether on local machines or cloud VMs, computing resources are not free, and insufficient utilization can easily cause huge waste. For example, a powerful MacBook Pro costs $4000+ USD apiece, but how often is it used to its full capacity? A powerful cloud machine also incurs a high monthly cost, but in reality many of them sit allocated without doing any real computation for days, weeks, or even months. In order to leverage more powerful machines on the cloud for development, optimizing utilization is critical. Fortunately, Crafting is tailored to the development use case and provides huge opportunities for resource optimization. As shown in the table below, combining *activity-based auto suspension* and *sharing VM resources among multiple dev containers* can achieve over 90% resource savings.
| Resource saving techniques                          | Estimated resource saving |
| :-------------------------------------------------- | :------------------------ |
| Activity-based auto suspension                      | 70-80%                    |
| Sharing VM resources among multiple dev containers  | 60-70%                    |
| Combined                                            | 90%+                      |

### Activity-based auto suspension

Developers are human beings, and we don't work 24/7. Even during working hours, we are often engaged in other activities such as meetings and code reviews, which don't require us to code on a computer. Keeping dev environments running on expensive machines during these "off-times" is obviously wasteful.

Crafting monitors activity in the sandbox and can detect whether it is actively used by a developer. If it's idle for a customizable period of time, Crafting can automatically suspend the sandbox while saving ongoing work in the workspaces' file systems. This frees up machine resources in the machine pool, which can be automatically scaled down or up depending on the load.

The auto-suspension threshold on Crafting can be set very aggressively, e.g., 30 or even 15 minutes, thanks to its capability for *full state saving* and *fast resuming*:

* Unlike many ephemeral environments, which destroy most files and need developers to carefully back up their changes, Crafting saves everything on a persistent volume and lets developers pick up where they left off.
* With the full state saved on a persistent volume, resuming a suspended sandbox happens very quickly, without reinitializing everything from scratch.

Resuming a sandbox can be triggered manually, or by Crafting when it detects a developer attempting to access a suspended sandbox. It offers a very smooth experience to developers. In our experience, activity-based auto-suspension saves between 70% and 80% of computing resources under regular use patterns.

### Sharing VM resources among multiple dev containers

Even while developers are actively coding on a machine, they don't need its peak performance all the time. In fact, during most coding hours, all we need is relatively lightweight text-editor functionality, even when running a powerful IDE. Only occasionally do developers engage in heavy operations on their dev machine, such as building code, indexing new code with the IDE, or running tests.

By default, Crafting organizes workspaces as dev containers and allows multiple containers to run on a shared, powerful VM. Each dev container sees the resources of the whole VM and can leverage its peak performance when needed, while letting other dev containers use the shared resources when it isn't engaged in heavy operations. Given that developers' heavy operations typically run in bursts and finish quickly when the machine's peak performance is high, this model can effectively save a significant amount of resources without hurting the developer experience. In our experience, this resource sharing saves between 60% and 70% of computing resources in most common cases.

## Hybrid development for combining local and cloud

With Crafting, developers don't need to adopt a new workflow in order to take advantage of the power of the cloud to overcome slowness on their local machines. Instead, they can start with the hybrid development model, which combines the familiar local environment with the power of the cloud. Here we cover a few common ways people who still prefer coding locally use Crafting for hybrid development.
You can pick whichever way fits your needs best.

### Code locally, build and run remotely with code sync

Sometimes writing code locally is fine with the resources available, but building the code and running the unit/integration tests takes too much time on the local machine alone. In this case, Crafting offers a convenient code sync feature to keep your local code folder and the code folder in your Crafting workspace in sync. This way, you can offload the heavy operations of building and running tests onto cloud machines while still coding locally with your familiar process and IDEs. See [here](https://docs.sandboxes.cloud/docs/code-sync) for details.

### Code and run locally, with context services on remote with port forwarding

Sometimes it's even fine to code and run a single service on the local machine, but it becomes a problem when end-to-end testing of the flow requires many microservices to run alongside your service. In this case, you can:

* Code and run the service you work on locally
* Use Crafting to launch all the other services in a sandbox on the cloud
* Use Crafting's two-way traffic forwarding to virtually plug your local service into the sandbox, making it able to call, and be called by, the other services.

This way, none of the context services consume your valuable local CPU and memory, solving your local slowness issue. See [here](https://docs.sandboxes.cloud/docs/port-forwarding) for details.
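A minimal sketch of both hybrid modes with the CLI (each command typically prompts you to select the sandbox and workspace; see the CLI reference for details):

```shell
# Two-way sync between a local folder and a folder in a cloud workspace,
# so edits you make locally propagate to the cloud as you save
cs mutagen

# Two-way traffic forwarding that virtually plugs a locally running
# service into a sandbox on the cloud
cs portforward
```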
---

# Kubernetes Development and Testing

In this section, we show how Crafting helps with development and testing of apps and services running on Kubernetes. The key challenges for developers working with Kubernetes services are:

* It is difficult for a developer to emulate a Kubernetes environment in their local dev environment for the service to run in.
* There is often no on-demand, per-developer Kubernetes environment available for developers to use as context for developing their component.
* Testing a service in a proper Kubernetes environment involves a long iteration cycle, typically coding locally, building a container, uploading it, and relaunching on Kubernetes.

The main benefits of using Crafting for Kubernetes include:

* Develop in a production-like Kubernetes environment and iterate quickly
* Use the same Kubernetes config files as production/staging
* Per-developer, self-contained Kubernetes environments running the product end-to-end
* Interactively code and see results immediately without re-launching containers
* Launched on demand with one click and auto-suspended when not used

The rest of this section covers the following topics:

* [Create a per-developer namespace to run services on Kubernetes](#create-a-per-developer-namespace-to-run-services-on-kubernetes)
* [Interactive debug in Kubernetes context via traffic interception](#interactive-debug-in-kubernetes-context-via-traffic-interception)
* [Manage lifecycle of Kubernetes namespaces with sandbox](#manage-lifecycle-of-kubernetes-namespaces-with-sandbox)
* [Setup Checklist](#setup-checklist)

A video demo can be found [here](https://bit.ly/crafting-democ4).

## Create a per-developer namespace to run services on Kubernetes

With Crafting, you can create your own self-contained namespaces on Kubernetes to run services for development and testing. You can have the whole end-to-end context with all the services running, and have the service you are working on run alongside them for high-fidelity testing.

As shown in the diagram above, you can use your Crafting sandbox to launch a per-sandbox Kubernetes namespace on demand, automatically, with pre-set configurations. The newly launched namespace is "bound" to the sandbox, and multiple sandboxes can have their corresponding namespaces launched and running at the same time, helping you and your teammates develop in parallel. You can reuse your production or staging Kubernetes config, such as Kubernetes manifests or helm charts, with little modification in order to launch services with high production fidelity. The Kubernetes cluster running your namespaces can be a new cluster or a staging cluster you already have configured properly, with access to all the cloud resources you already use.

Now let's walk through the process step by step. From the sandbox, we can see that the deployment in the per-sandbox Kubernetes namespace is represented as a `resource` in the sandbox. Remember that, as mentioned earlier, on the Crafting platform a `resource` represents an external entity that is managed together with the workspaces in a sandbox. You can customize the `resource` to create the namespaces and launch services in them. That way, individual developers don't need to learn how Kubernetes works; everything is configured properly to offer a one-click experience. In this example, we are using the example Kubernetes app from Google Cloud Platform, whose git repo can be found [here](https://github.com/GoogleCloudPlatform/microservices-demo).

Now, you can create the sandbox with the development workspace and Kubernetes resource by clicking **LAUNCH**. After launching the sandbox, as the development workspace is prepared, a namespace in the connected Kubernetes cluster is also created with all the services launched. This is controlled by the scripts in the resource, which you can customize for any additional setup. When the sandbox is ready, we can go into the Web IDE terminal and see that all the services in that Kubernetes namespace are up and running, visible to the `kubectl` command. At this point, you can run the app end-to-end on Kubernetes. You can also point a sandbox `endpoint` at a Kubernetes Service so that you can access the deployment as a user directly from a sandbox URL, as shown above.

## Interactive debug in Kubernetes context via traffic interception

With Crafting, when you want to test your dev version of a service in the Kubernetes context, you don't need to spend time building containers, updating the cluster, etc. Instead, you can directly "replace" the service running in the Kubernetes namespace with the dev version running in the sandbox via traffic interception. This way, you can see your code change *instantly*, without rebuilding containers or modifying Kubernetes configs, greatly shortening the iteration cycle.

As shown above, traffic interception is a two-way process. First, all the incoming traffic going into the service you intercept is routed to the corresponding workspace, where you run the dev version of the service to debug. Second, the workspace is virtually added to the Kubernetes network so that all the outgoing traffic from your dev version to other services can hit its intended targets directly, using the same DNS names in the cluster or even Pod IP addresses. Multiple interceptions can be done at the same time to the same or different workspaces, enabling integration debugging across multiple services, or collaboration between teammates.
In advanced mode, interception can be done conditionally so that multiple team members can share a large Kubernetes deployment as a base and only redirect their own testing traffic to their corresponding workspaces. This is ideal for a large team working on a large number of microservices.

Now let's see how it works step by step. As shown above, you can initiate interception directly from the sandbox page by selecting the cluster, namespace, and remote workload. In the next step, you select which workspace in the sandbox you want to intercept to, and which port. After the interception is set, the sandbox page shows the ongoing active interception with its status details; you can also stop the interception here at any time.

Now, let's open the Web IDE and make some modifications to the dev version of the service running in the workspace. As shown above, we added one line of logging in the `PlaceOrder` function of the `checkout` service, and we also placed a breakpoint on the line below the new logging. Now we run the dev version of the service in the workspace in debug mode and can go to the product page and initiate a product flow. Given that we are replacing the `checkout` service and changed the code in the `PlaceOrder` function, we trigger that flow by going to the shopping cart and placing an order.

As we click the "Place Order" button, we see the breakpoint in our dev version of the `checkout` service being hit, indicating that traffic is intercepted from the `checkout` service in the Kubernetes namespace to our dev workspace in the sandbox. The log line we added also prints as expected. When we continue from the debugger, our dev version of the `checkout` service calls other services like `payment` and `email` in the Kubernetes namespace to finish the flow. With traffic interception, you can code your dev version of the service in context and instantly test it with the other services running in Kubernetes. By avoiding the repeated "build, upload, test" process, and by debugging in place, your iteration speed is greatly improved!

## Manage lifecycle of Kubernetes namespaces with sandbox

A common issue with on-demand, per-developer Kubernetes namespaces is managing their lifecycles. If not properly managed, a lot of dangling namespaces may keep running in the cluster, wasting resources. Crafting helps by tying namespace lifecycles to sandboxes: you manage the lifecycle of the Kubernetes namespace by managing the sandbox itself. Specifically, when the sandbox is suspended, all the services in the namespace can be scaled to 0 replicas, saving resources; when the sandbox is resumed after suspension, everything scales back quickly. When the sandbox is deleted, the corresponding namespace is destroyed. For example, from the sandbox page (shown above), if we suspend a sandbox with a Kubernetes resource, it scales down the services in the corresponding namespace, as shown below.

With this coupling mechanism and the Crafting sandbox's auto-suspension (on the order of hours) and auto-cleanup (on the order of days) capabilities, you can make sure that if a developer leaves the namespace idle and unused, it stops consuming resources very soon and is eventually cleaned up from the Kubernetes cluster.
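For example, you might verify the scale-down from a workspace terminal (the namespace name here is hypothetical):

```shell
# After the sandbox is suspended, deployments in its bound namespace are
# scaled to 0 replicas; after the sandbox resumes, they scale back up
kubectl get deployments -n my-sandbox-namespace
```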
## Setup Checklist

To set up Crafting for Kubernetes, the following items are required:

* \[ ] **The Crafting system (SaaS or self-hosted) and your account on Crafting**
  First, you need a working Crafting platform that you can access via your account. There are three options: [Crafting Express](https://docs.sandboxes.cloud/docs/crafting-express), [Crafting SaaS](https://docs.sandboxes.cloud/docs/crafting-saas), and [Crafting Enterprise](https://docs.sandboxes.cloud/docs/crafting-enterprise).
* \[ ] **Connect your Kubernetes cluster to Crafting**
  Second, you need to connect your Kubernetes cluster to Crafting. Assuming you already have local `kubectl` access to your Kubernetes cluster, this can be done with the simple command `cs infra connect kubernetes`, giving the cluster a name on Crafting. With that command, Crafting does not modify any config of your Kubernetes cluster; instead, it installs a Crafting agent there to do everything it needs. After this step, you can already do traffic interception to debug any existing Kubernetes workloads in the cluster with a sandbox. Please see [Command Line Tool](https://docs.sandboxes.cloud/docs/command-line-tool#connect-a-kubernetes-cluster) for details.
* \[ ] **Setup `kubectl` access from sandbox**
  If using `kubectl` to access the cluster is desired in day-to-day development in a workspace on Crafting, you need to set up a `kubeconfig` file (or point to one via the `KUBECONFIG` environment variable). The Crafting Kubernetes agent provides a proxy allowing direct cluster access, or you can set up access via your cloud provider. Please see [Setup for Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-setup) for detailed instructions.
* \[ ] **Configure your sandbox with Kubernetes resource and lifecycle scripts**
  The last step is to set up the resource model in the sandbox. Here you can specify the naming convention for the namespaces and specify exactly what needs to happen when a sandbox is created/suspended/resumed/deleted. Please see [here](https://docs.sandboxes.cloud/docs/kubernetes-setup#orchestrate-deployment-of-per-dev-namespace-from-sandbox) for details.

---

# Code Change (PR) Preview

In this section, we illustrate the use case of previewing code changes (Pull Requests) with Crafting. The main benefits of Crafting in this use case are:

* Test any code change end-to-end in a self-contained environment before merge
* Leverage existing production config to create a production-like preview
* Environments are ephemeral, auto-suspended, and auto-cleaned-up
* Test from mobile apps, web frontends, or Internet-facing APIs, by teammates or partners
* Debug live in the environment, with fixes applied instantly without redeploying

The rest of this section covers the following topics:

* [Create an environment on-demand from Pull Requests](#create-an-environment-on-demand-from-pull-requests)
* [Run your app end-to-end in a self-contained environment](#run-your-app-end-to-end-in-a-self-contained-environment)
* [View logs, debug, and iterate](#view-logs-debug-and-iterate)
* [Replace a service on the cloud with a local service](#replace-a-service-on-the-cloud-with-a-local-service)
* [Setup checklist](#setup-checklist)

## Create an environment on-demand from Pull Requests

To create an environment (`Sandbox`) for preview, you select it from the sandbox creation page and pick a code branch or open Pull Request as the version of the code for the corresponding workspace, as shown below.

Alternatively, we recommend using an integration with your Git provider to embed a link in the Pull Request via some automation, e.g., a GitHub Action, which allows a sandbox with the PR's code branch to be created on demand when the user clicks the link. The following shows an example GitHub Action integration.
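The workflow below is a hypothetical sketch only: it assumes you have already generated an auto-launch link (stored here as a `CRAFTING_LAUNCH_URL` repository variable) per [Git Service Integration for Preview](https://docs.sandboxes.cloud/docs/git-integration); the link's exact format comes from your Crafting setup, not from this sketch.

```yaml
# .github/workflows/pr-preview-link.yaml -- comment a sandbox launch link
# on every newly opened Pull Request (illustrative sketch only)
name: pr-preview-link
on:
  pull_request:
    types: [opened]
jobs:
  comment:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/github-script@v7
        env:
          # Pre-generated Crafting auto-launch link (assumed to exist)
          LAUNCH_URL: ${{ vars.CRAFTING_LAUNCH_URL }}
        with:
          script: |
            // Post the launch link so reviewers can spin up a preview sandbox
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `Preview branch \`${context.payload.pull_request.head.ref}\` in a Crafting sandbox: ${process.env.LAUNCH_URL}`,
            });
```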
Once a sandbox is launched, the Crafting platform follows the template to create all components in their corresponding containers, run the services, and set up Internet-accessible endpoints for you to test. You can see all the logs during this process and debug any errors in build and run.

## Run your app end-to-end in a self-contained environment

After the sandbox is launched and ready, you can run your app end-to-end by accessing the entry services via any `endpoint`, as shown in the following. An endpoint is a URL that is routed to a port on a container in the sandbox, where your service is listening. For example, the above `frontend` endpoint (`https://frontend--pr21-demo.sandboxes.run`) points to the `frontend` workspace's port 3000, where the frontend server is listening, and the `api` endpoint points to the `api` workspace's port 3001, where the backend server is listening. When you open the `frontend` endpoint in a browser, the request hits the service running in the sandbox and loads the page; with proper setup, the API calls made from the frontend also hit the service running in the same sandbox via the `api` endpoint, as shown below. This way, you can do an end-to-end preview with a fully self-contained environment for your app.

For driving end-to-end previews from mobile apps, you can point your mobile app client at an endpoint from the sandbox, just like pointing it at your production API URL. The endpoint name is generated predictably from your sandbox name and endpoint name, for which you can adopt a good convention.

## View logs, debug, and iterate

For any service running in the sandbox, you can easily inspect its logs. You can hop onto the container and tail the logs from the command line via `cs log -f`, or use the web-based log viewer directly from the sandbox view, as shown below.

You can also add logging to your service, or modify the code any way you like, in the sandbox. By opening the Web IDE (shown below), you can access the source code and a terminal in the sandbox. Here we add more logging in the source code, and after restarting the process via `cs restart`, we use `cs log -f` to tail the logs from the terminal. When a request hits the sandbox again, the new logging shows up, as below.

After making a code change, you can commit the code directly from the terminal in the Web IDE and push to your git repo if needed. This avoids switching environments between the sandbox and your local machine, and allows very quick iterations to try fixes without waiting for redeployment.
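For example, a typical iteration from the workspace terminal might be:

```shell
# Inside the workspace terminal, after editing the code:
cs restart   # restart the affected daemons so the change takes effect
cs log -f    # tail the logs to confirm the new output on the next request
```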
## Replace a service on the cloud with a local service

In addition to viewing logs and editing code online, you can debug using your desktop IDE by running a local process that replaces a service on the cloud, and set breakpoints there. To do that, you first use port forwarding to connect your local machine with the sandbox on the cloud.

By running `cs portforward` and selecting the service you want to replace with your local process, you tell the Crafting platform to route the traffic that would hit the service in the sandbox to your local machine instead; it also handles port forwarding in the other direction, allowing your local process to access the other services in the sandbox via local ports. Then, you can launch your local process in debug mode from the desktop IDE with your local codebase and set breakpoints. When a request hits the sandbox's endpoint, the traffic is forwarded to your local port 3001, so the process running in debug mode serves that request and hits the breakpoint you set. This hybrid mode enables powerful debugging techniques and a great developer experience.

## Setup checklist

To set up Crafting for previewing code changes, the following items are needed:

* \[ ] **The Crafting system (SaaS or self-hosted) and your account on Crafting**
  First, you need a working Crafting platform that you can access via your account. There are three options: [Crafting Express](https://docs.sandboxes.cloud/docs/crafting-express), [Crafting SaaS](https://docs.sandboxes.cloud/docs/crafting-saas), and [Crafting Enterprise](https://docs.sandboxes.cloud/docs/crafting-enterprise).
* \[ ] **Your app on Crafting running end-to-end**
  Second, you need to set up your app on Crafting so that the platform knows how to build your app from source and run its services end-to-end for a particular version of the code in a sandbox. The required setup varies team by team and app by app. The Crafting platform supports a wide range of approaches to running your app, and we can work with you to find the most fitting approach for your case. In general, the trade-offs lie between two extremes.
  * *Use production config, existing build pipeline, and pre-built containers as much as possible.*\
    This generates an environment with high fidelity to production (or staging, if you choose to use your staging config). You can set up Crafting to call your existing build pipeline, fetch built artifacts or containers from your system, and run them in the sandbox. You can also let Crafting reuse pre-built production artifacts or containers for the services whose code doesn't change. The advantage is a very high-fidelity preview. The downside is that the prebuilt version may not have enough support for debugging when things don't work.
  * *Run services in dev mode with source code as much as possible.*\
    This generates an environment similar to your coding environment, with all the dev tools and debug capabilities you have. You can set up Crafting to check out the code from your source repo, prepare an environment in the workspace for the code to compile and run, and launch services in dev mode. This way, Crafting doesn't need to rely on your build system and uses its own computation resources to prepare the services running in the sandbox. The advantage is that the preview is highly customizable and dev-friendly, easy to tune and debug. The downside is that there may still be some gaps relative to production.
* \[ ] **\[Optional] Git repo integration**
  If you want a Crafting link to be automatically added to each Pull Request, set up the Git repo integration. Crafting supports programmatically created auto-launch links so that a sandbox with specific configs (e.g., which branch of the code to use) is launched when the link is clicked. You can generate that link in your git automation and post it to the PR. See [Git Service Integration for Preview](https://docs.sandboxes.cloud/docs/git-integration) for details.
* \[ ] **\[Optional] Kubernetes integration**
  If you are using Kubernetes, Crafting offers additional powerful support for it. Please see [here](https://docs.sandboxes.cloud/docs/use-case-kubernetes) for more information.
* \[ ] **\[Optional] Cloud resource integration**
  If your app uses serverless components such as AWS Lambda, SQS, etc., and you want the real cloud-native resources running alongside the containers for a high-fidelity preview, you can set up Crafting with your cloud account to create these resources on demand and manage their lifecycles with the sandbox. Please see [Develop with cloud resources](https://docs.sandboxes.cloud/docs/cloud-resources-dev) for more information.

---

# Maintainable Dev Environments

In this section, we describe the use case of setting up maintainable dev environments for your development team with Crafting. We start with a deeper dive into the potential issues of local dev environments and point out the best practices you can achieve with Crafting.

| Issues on Local Machine | Best Practice with Crafting |
| :---------------------- | :-------------------------- |
| [CPU architecture and operating system consistency issue](#cpu-architecture-and-operating-system-consistency-issue) | [Use standard architecture and dev OS image matching production](#use-standard-architecture-and-dev-os-image-matching-production) |
| [High overhead and error-prone process in environment setup](#high-overhead-and-error-prone-process-in-environment-setup) | [Repeatable dev environments, setup on-demand, clean up automatically](#repeatable-dev-environments-setup-on-demand-clean-up-automatically) |
| [Dev environment maintenance and frequent breakage](#dev-environment-maintenance-and-frequent-breakage) | [Version controlled update on environments, automatic rollout](#version-controlled-update-on-environments-automatic-rollout) |
| [Scalability issues in terms of number of services and developers](#scalability-issues-in-terms-of-number-of-services-and-developers) | [Manage multiple types of dev environments with templates](#manage-multiple-types-of-dev-environments-with-templates), [Centralized monitoring and online trouble-shooting](#centralized-monitoring-and-online-trouble-shooting) |

## Issues with local dev environments

As your development team scales, keeping the dev environments they use in their day-to-day work maintainable is crucial for productivity. Since modern software development usually requires developers to piece together several technologies, there are several common pain points for engineers relying on local machines for development.

### CPU architecture and operating system consistency issue

The first issue local dev environments face is consistency between local and production. While in production the majority of cloud services run on x86 CPUs on top of the Linux operating system, local machines are much more heterogeneous.
Developers commonly use MacBook Pros with the M1 chip, an Arm architecture with a different instruction set than x86. Programs and libraries often have subtle differences in behavior across architectures, which can raise many compatibility issues. In addition, the most commonly used operating systems on developers' laptops are actually not Linux: many developers use macOS or Windows, which brings more variables and potential issues into the fold. As a result, developers are commonly unable to run the service they want to code on their local machine, or, even worse, develop features locally that fail to work in production. Although virtualization and containerization can help to some extent, they are often imperfect and can cause hard-to-reproduce issues. The ultimate solution is still to make the dev environment consistent with production in terms of CPU architecture and operating system.

### High overhead and error-prone process in environment setup

Another common issue with local dev environments is the high setup overhead. It often takes *hours, days, or sometimes a week* for a new engineer to set up their dev environment on their local machine. The process is often based on documentation on a wiki or in the codebase, frequently outdated and/or littered with unstated tribal knowledge. As a result, even after a long setup, a local dev environment may still miss some important tweaks, causing future problems. Automation scripts and containerization can certainly alleviate the problem, but they still require the developer to fiddle with their local machine to apply them, and the CPU and OS consistency issues mentioned above sometimes make them hard to deal with.

### Dev environment maintenance and frequent breakage

Beyond setup, the maintenance of dev environments is often troublesome. Production systems often require relatively new library versions because of security patches and needed features. When one engineer upgrades a library version and gets it working in their environment, it sometimes breaks everyone else's dev environment. Then everyone needs to stop their work and upgrade their local dev environment following ad hoc instructions, which may fail on heterogeneous local environments with, say, different OS versions, potentially leading to cascading upgrades. That's why developers are usually reluctant to update their dev environments, but that makes their environments more outdated and harder to catch up in the future, resulting in a vicious cycle.

### Scalability issues in terms of number of services and developers

Even if dev environments are still maintainable with a single service/codebase and a small team, they become very difficult to manage with many services and a large team as the company scales. When there are multiple services to work on, developers in different roles often need to customize their dev environments to fit their roles, so a simple one-dev-image-fits-all approach fails. They often need to launch different sets of services and dependencies for their development and have them integrated in a certain way. Consistency issues naturally arise when everyone relies only on their local machine. With a large team of developers, it's also very hard for everyone to follow a standard practice on their local machines without proper guardrails. As a result, developers' dev environments may fail due to missteps.
When something goes wrong, it's often very difficult for the infra team to diagnose the issue on a developer's local machine. It's also hard for the infra team to monitor the health of dev environments across a large team of engineers.

## Best practices for maintainable dev environments with Crafting

Facing these issues, let's now talk about the best practices Crafting brings to your dev team.

### Use standard architecture and dev OS image matching production

With Crafting, you can easily manage a standard Linux OS image matching production, running on the same CPU architecture as your production machines. Crafting allows you to specify a machine pool in your cloud and use your custom Linux OS image for the dev containers. When all developers use a homogeneous dev environment aligned with production, there are no behavior gaps between development and production, completely resolving the consistency issue.

### Repeatable dev environments, setup on-demand, clean up automatically

On the Crafting platform, your developers can create new dev environments on demand with a single click, eliminating the long onboarding time of setting up dev environments for new team members. With the automation supported by Crafting, dev environments launch ready to code: they are pre-installed with all the necessary dev tools and libraries, and even have the proper background processes running for testing your code changes. Developers can easily create multiple dev environments for developing different features, no longer limited to a single dev environment on their local machine. Inactive dev environments are suspended and recycled automatically by the platform, so no dangling environments waste computing resources.

### Version controlled update on environments, automatic rollout

Crafting makes upgrading dev environments easy by standardizing the process. From dev OS images to versions of dev packages and dependency services, every aspect of a dev environment in Crafting is centrally managed and versioned. Any update to any part can first be tested in a sandbox and then saved as a new version of the template, which every newly created dev environment automatically picks up. Developers no longer need to pay special attention to maintaining dev environments on their local machines, and they never need to worry that an environment update will break their setup, because they can easily create a new environment with the latest updates, centrally tested and guaranteed to work properly.

### Manage multiple types of dev environments with templates

For different teams developing different parts of the overall product, Crafting allows different templates to be used to set up their specific dev environments. For example, if team A only needs services X, Y, and Z running for development, they can create their own version of the template that includes only those services instead of everything. Different flavors of dev environments make it flexible to support complex apps with many services and teams. Meanwhile, Crafting allows all the configurations to be managed centrally and composed in parts. This central management allows cross-cutting changes to be made easily for security or compliance reasons.

### Centralized monitoring and online trouble-shooting

Since Crafting centralizes dev environments on the cloud, standard practices and guardrails can easily be implemented by the infra team, even with a large number of developers to support.
There will be less breakage due to local errors, and even if something goes wrong, it's relatively easy for the infra team to hop onto the cloud container to see the exact problem and fix it. In addition, the health of everyone's dev environments can be monitored easily in the cloud, allowing the infra team to spot problems and react to issues quickly.

---

# Use command line tool

The Crafting Sandbox CLI, `cs`, provides full-fledged access to the Crafting platform from your local machine, and it is also pre-installed in all workspaces in any sandbox. Some advanced functionalities and configurations are only available via the CLI. In this page, we will go over some basics of the CLI. For more details, please refer to our [Reference](https://docs.sandboxes.cloud/docs/command-line-tool).

## Download, install, and login

From our Web Console, the [Download page](https://sandboxes.cloud/download) provides a simple command with the link to download and install the CLI on your system. It supports Linux and macOS natively. For Windows users, please set up [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install) and use the Linux distribution in WSL. After downloading, please make sure it is in your `PATH` for convenient access.

The CLI has an auto-update feature: it detects when a new version becomes available and updates itself automatically. Use `cs version` to see its current version.

You need to login so that the CLI can operate on your behalf. You can use `cs login` to explicitly login to an account, or directly run a command, which will prompt you to login when needed.

## SSH access to workspaces

Simply running `cs ssh` will show the list of sandboxes so you can select where to login. You can also use the `--workspace` or `-W` option to specify which workspace to login to, in the format `SANDBOX/WORKLOAD`. Just like normal SSH, you can also run a command remotely with it, or establish an SSH tunnel.

```shell
cs ssh
cs ssh -W demo/backend   # example: target a workspace as SANDBOX/WORKLOAD
```

If the sandbox is suspended, the `cs ssh` command will first resume it, and then take you to the terminal of the workspace you SSHed into. In the workspace, you will find the source code repository checked out under your home directory at your specified relative path. You can edit the code or run any git commands to switch branches, push code, etc. In all workspaces in the sandbox system, you act as `owner` and have password-less `sudo` access, which enables you to install any packages via `apt install` or do other customizations for your development needs.

## Commands to use inside workspaces

The CLI `cs` is available inside each workspace, and you can use it via any terminal (from ssh, from the Web IDE, or from your native IDE). You can use it to manage running processes, tail logs, and run automation such as building code.

```shell
cs ps       # List all processes (daemons)
cs up       # Start all or specified daemons
cs down     # Stop all or specified daemons
cs restart  # Restart all or specified daemons
cs log      # Tail workspace logs
cs build    # Run the build commands set up in the repo manifest
```

## More commands

Other commonly used commands from your local machine include:

```shell
cs vscode      # Launch local VS Code to directly edit code on sandbox
cs jetbrains   # Launch local JetBrains IDE to directly edit code on sandbox
cs template    # Template related commands: create, edit, list, etc.
cs sandbox     # Sandbox related commands: create, list, etc.
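cs version     # Show the current CLI version (mentioned above)
cs login       # Explicitly login to an account (mentioned above)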
cs portforward # Start a port-forwarding session for hybrid development
cs scp         # Copy files between local and sandbox
cs rsync       # Run rsync between local and sandbox
cs mutagen     # Run two-way sync between local and sandbox
```

The CLI tool also provides management features for templates and sandboxes, as well as many other convenience and informational features. Some are for more advanced use cases and will be elaborated on in the corresponding topics later in this document. Read the full [Reference](https://docs.sandboxes.cloud/docs/command-line-tool).

---

# User Guide

This chapter guides a developer through using Crafting sandboxes in their day-to-day development. We will walk through a list of common tasks and best practices to improve your productivity for day-to-day coding, testing, debugging, etc. For how to configure the Crafting platform for your team to effectively manage development environments, please see the [admin guide](https://docs.sandboxes.cloud/docs/admin-overview), which covers how to set up templates, snapshots, testing data, PR integration, etc.

Here is an outline:

* [Login](https://docs.sandboxes.cloud/docs/login)
* [Start a Workspace](https://docs.sandboxes.cloud/docs/start-a-workspace)
* [Basic Steps](https://docs.sandboxes.cloud/docs/basic-steps)
  * [Launch a sandbox](https://docs.sandboxes.cloud/docs/launch-a-sandbox)
  * [Work on a sandbox](https://docs.sandboxes.cloud/docs/work-on-a-sandbox)
  * [Use command line tool](https://docs.sandboxes.cloud/docs/use-command-line-tool)
  * [Code with VS Code](https://docs.sandboxes.cloud/docs/code-with-vs-code)
  * [Code with JetBrains IDEs](https://docs.sandboxes.cloud/docs/code-with-jetbrains-ides)
  * [Suspend and resume](https://docs.sandboxes.cloud/docs/suspend-and-resume)
* [Advanced Topics](https://docs.sandboxes.cloud/docs/advanced-topics)
  * [Port-forwarding for hybrid development](https://docs.sandboxes.cloud/docs/port-forwarding)
  * [Code sync for hybrid development](https://docs.sandboxes.cloud/docs/code-sync)
  * [Copy files between local and cloud](https://docs.sandboxes.cloud/docs/copy-files)
  * [Environment variables (ENV)](https://docs.sandboxes.cloud/docs/environment-variables)
  * [Save and load data snapshots](https://docs.sandboxes.cloud/docs/data-snapshots)
  * [Auto-follow code branch in sandbox](https://docs.sandboxes.cloud/docs/auto-follow)
  * [Access control in sandbox](https://docs.sandboxes.cloud/docs/access-control)
  * [Develop on Kubernetes](https://docs.sandboxes.cloud/docs/kubernetes-dev)
  * [Develop with cloud resources](https://docs.sandboxes.cloud/docs/cloud-resources-dev)
  * [Personalize your sandbox](https://docs.sandboxes.cloud/docs/personalize)

---

# Work on a sandbox

When the sandbox is launched and ready, we can start using it. In this page, we will use the simple multi-service demo app to cover the following parts:

* [Run previews](#run-previews)
* [Inspect the workspaces](#inspect-the-workspaces)
* [View logs generated from your services](#view-logs-generated-from-your-services)
* [Use Web IDE to write code or run commands](#use-web-ide-to-write-code-or-run-commands)
* [Rebuild workspace](#rebuild-workspace)

For additional information, please see [Advanced Topics](https://docs.sandboxes.cloud/docs/advanced-topics).

## Run previews

To preview the code changes you have in the sandbox, you can access the sandbox via `endpoints`. These are URLs exposed by the sandbox which get routed to backend services, letting you run the entire product flow end-to-end.
For example, in the demo app, we have two HTTPS endpoints, `app` and `api`, routed to the frontend service (port 3000) and backend service (port 3001), respectively. We can hit the `app` endpoint as shown below to run the product flow.

After clicking the `app` endpoint, we can see it opens a new page with the sandbox URL `https://app--demo-demo-cloud.sandboxes.run/` and with the code version in the sandbox. Similarly, you can hit the `api` endpoint from your web frontend, mobile frontend, or command-line tools like `curl`.

## Inspect the workspaces

You can see the information of each workspace running in the sandbox by clicking into it. For example, if we click into the backend workspace, we can see it has one repo checkout, `demo-jobs-backend`, which is currently on the `master` branch and checked out to the path `backend`. It also has two daemon processes running, one open port (`api`, port 3001) with the HTTP protocol, and `ruby` version 2.7.2 installed.

## View logs generated from your services

From here, you can click the page icon to view the logs for specific services or daemon processes. For example, below is the log for the rails process.

## Use Web IDE to write code or run commands

When you want to go into the workspace to write code or run commands, you can simply open the Web IDE by clicking any of the buttons highlighted below:

For example, if we open the Web IDE on the frontend, we see the following window: a web version of VS Code opened with a terminal panel.

![Web IDE screenshot](https://files.readme.io/40fad3d-guide-web-ide.JPG)

Here we can directly modify the code (e.g., change the page title `Crafting Jobs` to something else); the change takes effect immediately and we can preview it via the endpoint URL. The terminal here starts in the code checkout directory. Given it's a regular Git checkout, you can run git commands such as `git pull`, `git checkout`, `git commit`, `git push`, etc. directly from here. Or you can run any other command here that you want to execute on this dev container.

## Rebuild workspace

In some cases you may want to wipe one of your workspaces clean (and its dependencies or containers as well) to restart it from scratch. You can rebuild it by clicking the rebuild button highlighted below:

Rebuilding a workspace clears all local state. It checks out the code fresh from the git repo, runs the setup and build, and launches the service, all according to what's specified in the template. It essentially gives you a fresh workspace, as if from a new sandbox. Rebuilding dependencies removes all current data and resets them to empty (or to the default data snapshot specified in the template).

---

# Setup workspaces

This page talks about the steps to set up each dev container (`workspace`) to make it ready to code. Here we assume you already have a `Template Builder Sandbox`, which can be basically any sandbox that is not based on a template, also called a [Standalone sandbox](https://docs.sandboxes.cloud/docs/standalone-sandbox). We recommend opening a Web IDE for the workspace you want to set up, because some commands are best run with the code and a terminal at hand.

From the editing view, clicking the workspace (e.g. `demo-jobs-backend`) will open the workspace view for editing its details.

Next, we will walk through the steps to set up one workspace. If your template includes multiple workspaces, please repeat the setup in each workspace following these steps.
* [Add code checkouts](#add-code-checkouts)
* [Install required system packages](#install-required-system-packages)
* [Build and launch service and setup automation in repo manifest](#build-and-launch-service-and-setup-automation-in-repo-manifest)
* [Setup environment variables](#setup-environment-variables)
* [Persist packages and libraries setup with snapshots](#persist-packages-and-libraries-setup-with-snapshots)
* [Initialization scripts for workspace setup](#initialization-scripts-for-workspace-setup)
* [Add additional workspaces](#add-additional-workspaces)
* [Test setup with new sandbox](#test-setup-with-new-sandbox)

## Add code checkouts

Code checkouts are a fundamental part of the workspace: a meaningful workspace typically has one or more code checkouts. If you selected git repos in the [Template builder wizard](https://docs.sandboxes.cloud/docs/template-builder), your workspace should already have the code checked out, as shown below. If you didn't, or you want to add more code checkouts for the workspace, you can click the `+` button there to add another code checkout. All the source repos listed in the `Checkout` section will be automatically checked out to the specified path (relative to the home directory).

## Install required system packages

Next, make sure we have the proper dev packages to support our dev environment. We can adjust the built-in packages here as well. In addition to the built-in packages, we can install any needed package from the terminal in the Web IDE (or an ssh connection) using `sudo apt install` and persist it later (see [Persist packages and libraries setup with snapshots](#persist-packages-and-libraries-setup-with-snapshots)). Essentially, we set up the workspace interactively here, just like installing software on a Linux system.

Alternatively, you can use a custom container image you have as the base for your workspace, where you can make sure all your dev packages are at the right version. To set this up, check the `Custom Image` box in the `base` part of the `Snapshots` table, and input a container image URL, as shown below.

![Custom image setup](https://files.readme.io/1478cdf-image.png)

There are some additional notes for using a custom image as the base image for a workspace:

* The `base snapshot` functionality (which is used to persist your system libraries) is replaced by the container image; therefore you can no longer create base snapshots for this workspace.
* Some tool packages may not work if the container image contains incompatible system libraries.
* The container image must be built with a few required packages installed.

## Build and launch service and setup automation in repo manifest

Once we have the necessary system packages installed, we want to make sure our code builds and runs in the workspace. Depending on the language and frameworks of your code and the build tools you use, this could be `bazel build`, `gradlew`, `go build`, `bundle`, `yarn`, and so on. This process typically also involves downloading many libraries your code depends on into your home directory, which we can persist later (see [Persist packages and libraries setup with snapshots](#persist-packages-and-libraries-setup-with-snapshots)).

If your service depends on a database, you may need to run database migrations and seeding first. You may need to adjust config files in your code to make sure it finds the database you selected as a dependency, which runs as a separate container.
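Each dependency is reachable at a hostname equal to the dependency's name, as noted below. For example, here is a minimal sketch of a Rails `config/database.yml` pointing at a dependency named `mysql`; the dependency name, credentials, and database name are illustrative assumptions, not part of the demo:

```yaml
# Hypothetical config/database.yml snippet: the dependency's name ("mysql")
# doubles as its hostname inside the sandbox.
development:
  adapter: mysql2
  host: mysql          # hostname == dependency name
  port: 3306
  username: root
  password: mysql      # assumed to match DB_ROOT_PASSWORD in the manifest below
  database: demo_development
```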
All dependencies use their dependency name as their hostname. For more information on networking within the sandbox, please see [Network configuration and endpoints](https://docs.sandboxes.cloud/docs/network-setup).

Once the code is built and ready to run, you can start the service just like you do on your local machine, using commands such as `java`, `rails server`, `npm run`, etc. Once we make sure the service runs properly, we can edit the `Repo Manifest` to persist it in the template.

![Repo manifest editor](https://files.readme.io/1bb9d45-image.png)

The `Repo Manifest` defines how the Crafting system automates the setup of a git repository in the workspace after checking out the code, so that the setup steps we performed above run automatically. An example repo manifest is as follows.

```yaml
env:
  - DB_ROOT_PASSWORD=mysql
  - RAILS_ENV=development
hooks:
  post-checkout:
    cmd: |
      bundle exec rake db:migrate
build:
  cmd: |
    bundle install
daemons:
  rails:
    run:
      cmd: bundle exec rails s -p 3001 -b 0.0.0.0
```

The `Repo Manifest` defines what to run each time a new version of the code is checked out, how to build the code, and how to launch the service as a daemon process. See [Repo Manifest](https://docs.sandboxes.cloud/docs/repo-manifest) for details. To quickly iterate and test, you can edit the repo manifest directly here in the template editor. After it's set properly, we recommend committing it along with your code as `.sandbox/manifest.yaml`.

## Setup environment variables

A development environment often needs environment variables (ENVs) set for the services to behave accordingly. Many settings are stored in ENVs, and services commonly need ENVs to point to config files and other services. Here in the template, we can automate the setup of default environment variables.

![Environment variables setup](https://files.readme.io/2d1491c-image.png)

As shown above, you can set environment variables on two levels: sandbox-level or workspace-level. Sandbox-level ENVs are applied to all workspaces in the sandbox, while workspace-level ENVs are only applied to this particular workspace. Just as described in [Environment variables (ENV)](https://docs.sandboxes.cloud/docs/environment-variables):

* There are built-in environment variables defined by Crafting.
* New ENV definitions can expand on existing definitions, and you can re-define an ENV to overwrite it.
* ENVs are merged in a particular order: built-in, then sandbox-level, then workspace-level.

At sandbox creation time, the user can extend the default ENV definitions in the template, but it's still good practice to define a solid set of default ENVs in the template to achieve a one-click experience for users.

## Persist packages and libraries setup with snapshots

To persist the setup of the workspace's file system, including installed system packages and libraries fetched during the build, Crafting features a `Snapshot System` to save file modifications for later reuse. Two types of snapshots are used to save files for quickly setting up a sandbox from the template later.

The `Base Snapshot` captures root filesystem changes except `/home/...`. We can create a base snapshot using the CLI running in the target workspace:

```shell
cs snapshot create NAME
```

If a workspace doesn't currently have a base or home snapshot, the newly created base or home snapshot will automatically be applied to the `Template Builder Sandbox` as the default snapshot.
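For example, a minimal base-snapshot flow, run from a terminal in the target workspace, might look like the following; the package and snapshot names are hypothetical:

```shell
# Install system-level packages interactively (these live outside /home,
# so they are captured by a base snapshot).
sudo apt install -y postgresql-client jq

# Persist the root-filesystem changes as a base snapshot.
cs snapshot create backend-base
```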
### Home snapshots

The `Home Snapshot` is used to persist files under the home directory, typically including the cache for libraries and packages installed at the user level (not the system level). Usually, we may consider capturing the following in a home snapshot:

* Configurations (`~/.config`).
* Cache (`~/.cache`, to speed up building code).
* Environment (`~/.bashrc`, `~/.bash_logout`, `~/.profile`, etc.).
* Locally installed software (`~/.local`, and other folders depending on the software).
* VS Code extensions (`~/.vscode-remote/extensions`, `~/.vscode-server/extensions`).

And the following should NOT be included in a home snapshot, as they may contain user-specific or sensitive information:

* Source code (it varies every time it's checked out and doesn't make sense to include in a home snapshot).
* Credentials and private keys (e.g. `~/.ssh`, and some config folders under `~/.config`).

To create a home snapshot, we first need to create `includes.txt` (and optionally `excludes.txt`) in the workspace:

* `~/.snapshot/includes.txt`: list of patterns of folders/files to be included, handled by the `-T` option of the `tar` command.
* `~/.snapshot/excludes.txt`: list of patterns of folders/files to be excluded, handled by the `-X` option of the `tar` command.

The final list of files is collected from `~/.snapshot/includes.txt`, minus those matching `~/.snapshot/excludes.txt`. An example of a commonly used inclusion list is:

```text
.bashrc
.bash_logout
.profile
.config
.local
.cache
.vscode-server/extensions
.vscode-remote/extensions
.vscode-remote/data/User/extensions.json
```

> Use relative paths only
>
> The paths/patterns in `~/.snapshot/includes.txt` and `~/.snapshot/excludes.txt` must be relative paths from the home directory (`/home/owner`). Special folders like `.` or `..` are not allowed. In `~/.snapshot/includes.txt` especially, all paths must exist, otherwise the snapshot process may fail.
>
> Some common mistakes are entries like:
>
> * `~/.config`, `~/.vscode-server/extension`, etc.
> * `./.local`, `../some-folder`, etc.

> The extensions.json file
>
> Old VS Code versions don't use `data/User/extensions.json` and load extensions directly from the `extensions` folder. More recent versions require `data/User/extensions.json` in addition to the `extensions` folder. If your local VS Code doesn't load any extensions from a workspace whose home snapshot already contains extensions, please add the following to the inclusion list before creating the home snapshot:
>
> ```text
> .vscode-server/data/User/extensions.json
> ```

Mostly, `~/.snapshot/excludes.txt` is not needed, except in the following cases:

* Some sub-directories (and/or files) of an included directory must be excluded.
* The inclusion pattern is complicated or difficult to write.

In these cases, add the top-level directory to `~/.snapshot/includes.txt`, and add the subfolders, specific files, etc. to be excluded to `~/.snapshot/excludes.txt`. For instance, suppose a tool stores credentials in `~/.config/SomeTool/credentials`, which must be excluded.
As `.config` is already in the inclusion list in the example above, the exclusion rule is simpler to express in `~/.snapshot/excludes.txt`:

```text
.config/SomeTool/credentials
```

After the configuration files are ready, take a home snapshot by running the CLI from a terminal in the workspace:

```shell
cs snapshot create --home NAME
```

Note that the home snapshot represents the team's default setup of the home directory in the workspace; each developer can extend it with their `Personal Snapshot`, which follows a similar creation pattern (see [Personalize your sandbox](https://docs.sandboxes.cloud/docs/personalize)).

### Trade-offs between home snapshot and automation scripts

The home snapshot is a great way to speed up launches of new sandboxes from templates. Sometimes, to properly set up a workspace, a large number of packages need to be downloaded from the Internet and installed using the automation hooks in the repo manifest or workspace-level setup scripts (described [below](#initialization-scripts-for-workspace-setup)). Especially with a large codebase for a sophisticated product, the setup time can be very long. With snapshots, the setup time can often be greatly shortened, sometimes by 90%. On the other hand, it's often impractical to update the snapshot every time something changes, so it's common practice to still run the full setup script, which takes advantage of the cache when it is present.

Like any caching system, snapshots require some maintenance. If too much has changed, the cache becomes less useful and you will need to retake the snapshot to include the new packages. And as a snapshot grows, it can behave unexpectedly when its content becomes stale; for example, the snapshot may still contain an old version of a library while the setup script tries to install a new version, causing confusion. So our recommendation is:

* If the setup script finishes very quickly, just rely on the script to set up the environment.
* If the setup script runs for a long time, do use home snapshots, but refresh them periodically (e.g., quarterly).
* After a major code cleanup or restructuring, refresh the snapshot from a clean state to clear out unused content.

The same trade-off also applies between data snapshots and seeding scripts.

## Initialization scripts for workspace setup

Sometimes, in addition to restoring file changes from snapshots, we want certain custom setup steps to run at workspace creation time, for example, connecting to a VPN. Crafting supports two types of scripts in a workspace, which run when it starts or resumes from suspension:

* `/etc/sandbox.d/setup`: when present with *exec* permission, it runs as *root* after all home snapshots (shared home snapshot and personal home snapshot) are applied;
* `~/.sandbox/setup`: when present with *exec* permission, it runs after `/etc/sandbox.d/setup`, under the identity of the workspace owner user.

These scripts are meant for dynamic customization, mostly updating configurations based on the current sandbox information. Installing software should instead be done interactively and saved in a base snapshot to make the workspace launch faster. The script `/etc/sandbox.d/setup` can be included in a base snapshot and used for system-level customization, while `~/.sandbox/setup` can be included in a home snapshot for per-user customization. The latter can optionally load further scripts included in a personal snapshot.
For example:

```shell
#!/bin/bash
# ... Do something ...
if [[ -f ~/.sandbox/personalize.sh ]]; then
  . ~/.sandbox/personalize.sh
fi
```

Then put this entry in the home snapshot's inclusion list, and `~/.sandbox/personalize.sh` in the personal snapshot's:

```text
.sandbox/setup
```

```text
.sandbox/personalize.sh
```

> SSH Host Key Verification failure
>
> The startup scripts run before any git operation performed by Crafting. If `git` commands are used in these scripts with the SSH protocol (e.g. `git@github.com:...`), you will see errors like:
>
> ```text
> Host key verification failed.
> ```
>
> The reason is that when the script runs, there is no `~/.ssh/known_hosts` yet; without interactively asking for confirmation, SSH automatically rejects any unknown host keys.
>
> The solution is to add `export GIT_SSH_COMMAND='ssh -o StrictHostKeyChecking=no'` in the scripts before running any `git` commands.

## Daemons in workspace

Background processes can be defined in a workspace to be launched after startup and kept running. There are two places to define daemons:

* In the template, see [Workspace System](https://docs.sandboxes.cloud/docs/sandbox-definition#workspace-system), for example:

  ```yaml
  workspaces:
  - name: example
    ...
    system:
      daemons:
      - name: foo
        run:
          cmd: ./foo bar
          dir: /opt/foo
          env:
          - FOO=BAR
  ```

* On the filesystem under `/etc/sandbox.d/daemons`, baked into the base snapshot; for example, add the file `/etc/sandbox.d/daemons/foo.yaml`:

  ```yaml
  name: foo
  run:
    cmd: ./foo bar
    dir: /opt/foo
    env:
    - FOO=BAR
  ```

## Add additional workspaces

Crafting supports multiple workspaces running in a sandbox to run multiple services. To add a new workspace, simply click `Add Component` from the editing view, as highlighted below, and select `Workspaces` in the dialog.

After adding the new workspace, just repeat the steps above to set it up, until all workspaces have been added and set up. During the process you may want to add other components like dependencies or containers; please see the following pages (e.g. [Setup containers and dependencies](https://docs.sandboxes.cloud/docs/containers-dependencies-setup)) for details.

## Test setup with new sandbox

Lastly, to test the setup for the workspace and make sure it works properly, we recommend creating a new sandbox by clicking `Try with new sandbox`, highlighted below.

![Test with new sandbox](https://files.readme.io/9ff90c2-image.png)

If everything is automatically set up for the workspace in the new sandbox, we are done here. If not, we can adjust some settings, retake snapshots, etc., and then test again.