# PlanetScale

> When you set up your PlanetScale account, you're asked to create an **Organization**.

---

# Source: https://planetscale.com/docs/security/access-control.md

# Access control

## Organization access control

When you set up your PlanetScale account, you're asked to create an **Organization**. An organization is essentially a container for your databases, settings, and members. You can create multiple organizations in the same account for different applications or use cases.

Within each organization, you can add members and assign them different roles. This document covers the different roles, the ways you can assign them, and the permissions associated with each role.

## Roles and permissions

We currently support three different roles in your organization:

* `Organization Administrator`
* `Organization Member`
* `Database Administrator`

### Organization Administrator

An `Organization Administrator` can perform all actions in an organization, as well as all actions on *every* database within that organization.

### Organization Member

An `Organization Member` can only perform limited actions within an organization and on the databases in that organization. By default, all users added to an organization have this role.

### Database Administrator

A `Database Administrator` can perform all actions on the database for which they were assigned the `Database Administrator` role. This role is assigned at the **database level** and grants elevated permissions for the particular database that an organization member is the `Database Administrator` of.

If you want to [grant a member *full* access to manage one or several databases](#assign-roles-at-a-database-level) but not full `Organization Administrator` access, this is the role you want.

Please note, a user granted this role must be a member of the organization in which the database exists, so they will have the permissions associated with `Organization Member` as well.

## Organization-level permissions

Each role has a set of permissions assigned to it, which determines what actions that role is allowed to take within an organization or database.

The following table describes permissions assigned at the organization level for `Organization Administrators` and `Organization Members`. Because `Database Administrators` don't have any organization-level permissions, they are not included in this table.
| Action | Description | Member | Administrator |
| ------ | ----------- | ------ | ------------- |
| View branches | View a database branch | | |
| Create branches | Create a database branch | | |
| Delete non-production branches | Delete a non-production database branch | | |
| View databases | View one or all databases | | |
| Create databases | Create a new database | | |
| Create deploy requests | Create a deploy request for a branch | | |
| Manage service tokens | Create, view, or delete service tokens | | |
| Manage service token grants | Create, view, update, or delete service token grants | | |
| View organization members | View one or all organization members | | |
| View database members | View one or all database members | | |
| View organization | View an organization | | |
| View query statistics | View query statistics for an organization's databases | | |
| Connect to development branches | Create passwords or use pscale shell for development branches | | |
| Connect to production branches | Create passwords or use pscale shell for production branches | | |
| Delete production branches | Delete a production database branch | | |
| Promote branches | Promote a branch to production | | |
| Modify VSchema (Vitess only) | Edit the VSchema of a keyspace | | |
| Manage databases | Delete, update settings, or import a database | | |
| Manage beta features | Opt-in or opt-out of a beta feature | | |
| Create production service token grants | Create a service token grant to connect or delete a production database branch | | |
| Update an integration | Update a third-party integration | | |
| Manage invitations | View, create, or cancel organization invitations | | |
| Manage invoices | View or download organization invoices | | |
| Manage billing | View or update billing plans and payment methods | | |
| View audit logs | View all audit logs | | |
| Manage organization members | Update member roles or delete organization members | | |
| Manage database members | Update member roles, add, or remove database members | | |
| Manage organization | Update organization settings, SSO, or delete organization | | |

## Database-level permissions

The following table describes the permissions assigned at the **database level** for `Organization Administrators`, `Organization Members`, and `Database Administrators`. For `Organization Administrators` and `Organization Members`, these permissions apply to every database in the organization. Because the `Database Administrator` role is assigned at the database level, the permissions are for the specific database(s) for which they have the `Database Administrator` role.
| Action | Description | Member | Administrator |
| ------ | ----------- | ------ | ------------- |
| Create and view branches | Create or view a database branch | | |
| Delete non-production branches | Delete a non-production branch of a specific database | | |
| View database | View a database in an organization | | |
| Create deploy requests | Create a deploy request for a branch on a specific database | | |
| View database members | View one or all database members | | |
| View query statistics | View query statistics for an organization's databases | | |
| Restore non-production backups | Restore the backup of a development branch | | |
| Connect to development branches | Create passwords or use pscale shell for development branches | | |
| Connect to production branches | Create passwords or use pscale shell for production branches | | |
| Manage billing | Update the billing plan of a specific database | | |
| Delete production branches | Delete a production database branch of a specific database | | |
| Promote branches | Promote a branch of a specific database to production | | |
| Manage database | Delete, update settings, or import a database | | |
| Manage beta features | Opt-in or opt-out of a beta feature for a database | | |
| Manage database members | Update database member roles, add, or remove database members | | |
| Restore production backups | Restore the backup of a production branch | | |

An organization may have several databases, and an `Organization Member` may have different access to each database depending on whether or not they also have the `Database Administrator` role.

## Assign organization roles to members

You can follow the steps below to assign roles to your members. You must be an Organization Administrator to modify member roles.

* In the [PlanetScale dashboard](https://app.planetscale.com), click on the Settings tab in the top navigation.
* Click on "Members" in the sidebar on the left.
* From here, you can click on the dropdown on the right under the "Role" column to select the role you want to apply to each member.

You can also invite new members to your organization and assign roles once they accept their invitation. New members will be added with the [`Organization Member`](#organization-member) role by default.

Member roles are managed at the organization level. Each organization in your account may have different members with different access levels.

## Assign roles at a database level

There are two ways to assign database-level roles to organization members:

1. Individually, using the `Database Administrator` role.
2. Creating a Team, adding member(s), and adding database(s) to that team.

### Individually assign the `Database Administrator` role

To assign a member the role of `Database Administrator`, follow the steps outlined below. You must be an Organization Administrator or an existing Database Administrator to manage the `Database Administrator` role. Members that create a database are automatically assigned the role of `Database Administrator` for that database.

In the [PlanetScale dashboard](https://app.planetscale.com), click on the name of the database you want to add a Database Administrator to.

Click on the "**Settings**" tab in the top navigation.

Click on "**Administrators**" in the sidebar on the left.
To add an administrator, click on the "**Add administrator**" button and select the member you wish to add as a Database Administrator.

From here, you can also remove a Database Administrator by clicking the "**Remove**" button next to their name.

### Add Database Administrator role via Teams

If you wish to give several members the Database Administrator role, you may want to [create a Team](/docs/security/teams#create-and-manage-teams). This allows you to manage access to that database in one place. For instructions, see our [Teams documentation](/docs/security/teams).

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/security/account-password-security.md

# Account password security

> In addition to best practices like [multi-factor authentication](/docs/security/multi-factor-authentication), PlanetScale securely stores your account passwords and validates passwords against known security breaches.

## Password storage

PlanetScale uses [Argon2](https://en.wikipedia.org/wiki/Argon2) as the password hashing function. We use the `Argon2id` variant, which provides protection against side channel attacks and GPU-based cracking attacks.

A password hashing function is a one-way function, which means that your password cannot be reversed or decrypted from the stored value in the database.

## Leaked passwords

PlanetScale checks passwords when a user sets them during signup or when changing an existing password.

The first check is that the password needs to have enough entropy. Entropy is a measure of the amount of randomness a password contains. Read more about how we use entropy for [user-friendly strong passwords in the PlanetScale blog](https://planetscale.com/blog/using-entropy-for-user-friendly-strong-passwords).

PlanetScale also checks the password against [Have I been pwned](https://haveibeenpwned.com). *Have I been pwned* is a large database containing passwords seen in security breaches. PlanetScale does **not** send the password you enter to *Have I been pwned*. The *Have I been pwned* API provides anonymity through [the Cloudflare k-anonymity implementation](https://blog.cloudflare.com/validating-leaked-passwords-with-k-anonymity/). This ensures that no other provider can identify the password that you have entered (see the illustrative sketch at the end of this page).

The password is also not associated in any way with the email address you use to sign up. This information is not shared with *Have I been pwned*, nor is this information needed for the leaked passwords API they provide.

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.
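As an aside to the "Leaked passwords" section above, the k-anonymity model is easy to demonstrate: a client hashes the password locally, sends only the first five characters of the SHA-1 hash to the public *Have I been pwned* range API, and checks the returned suffixes locally. This is a minimal, illustrative sketch of the general technique, not part of PlanetScale's signup flow:

```bash theme={null}
#!/usr/bin/env bash
# Illustrative k-anonymity check against the public Have I been pwned range API.
# Only the first 5 hex characters of the SHA-1 hash ever leave this machine.
# (On macOS, use `shasum` instead of `sha1sum`.)
read -rs -p "Password to check: " password; echo

hash=$(printf '%s' "$password" | sha1sum | awk '{print toupper($1)}')
prefix=${hash:0:5}
suffix=${hash:5}

# The API returns every known suffix sharing our 5-character prefix;
# the actual match happens locally, so the service never sees the password.
if curl -fsS "https://api.pwnedpasswords.com/range/${prefix}" | grep -q "^${suffix}:"; then
  echo "Password appears in known breaches."
else
  echo "Password was not found in the breach corpus."
fi
```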
---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/schema-changes/aggressive-cutover.md

# Aggressive deploy request cutover

## Overview

Cutover is the critical final step in an online schema migration, where Vitess atomically replaces the original table with a newly created "shadow" table that contains the updated schema. This process involves acquiring metadata locks, preventing writes to the original table, ensuring complete data synchronization, and renaming tables to complete the migration.

The cutover process can fail or time out when the table is locked by long-running queries or transactions, preventing Vitess from acquiring the necessary metadata locks. When this happens, Vitess will retry the cutover operation until it succeeds.

Aggressive cutover is a setting that forces the cutover to complete immediately by killing any queries or transactions that are blocking the operation. When enabled, the system will prioritize schema migration completion over preserving running queries.

## When to enable aggressive cutover

You should consider enabling aggressive cutover in these scenarios:

1. **Migration delayed due to long-running transactions**: If you receive the "Migration delayed due to long-running transactions" notice on your deploy requests, this indicates that the cutover cannot complete because there are long-running transactions on the table. Enabling aggressive cutover will force the cutover to happen by killing those blocking queries.
2. **Application has slow queries or long-running transactions**: If your application consistently runs slow queries or long-running transactions that prevent migrations from completing, aggressive cutover may be necessary. This setting is disabled by default because well-optimized applications should not require it.

## How it works

Aggressive deploy request cutover can be enabled for a database by admins only. To enable the setting, visit the database settings page and look under "Advanced settings".

When aggressive cutover is enabled, the system immediately begins killing queries and transactions that are using or locking the migrated table on the very first cutover attempt. This aggressive approach ensures the migration completes without waiting for blocking operations to finish naturally.

**Normal cutover behavior:**

* Vitess attempts to acquire locks on the table
* If blocked by ongoing queries/transactions, it waits and retries
* This process continues until the cutover succeeds or 1 hour elapses
* After 1 hour, the cutover will be forced

**Aggressive cutover behavior:**

* Vitess immediately kills any queries or transactions blocking the cutover
* The cutover proceeds without waiting for blocking operations to complete

**Important considerations:**

* Having retry logic, or another strategy for re-running killed queries, is advised
* Once enabled, this setting applies to all future deploy requests

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.
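As a practical addendum: before enabling the setting, you may want to see which operations would be killed. Below is a sketch of how to list long-running transactions with any MySQL client, assuming read access to `information_schema` (the 60-second threshold is arbitrary):

```bash theme={null}
# List transactions open for more than 60 seconds. Entries like these are
# the kind of blocking operations an aggressive cutover would kill in order
# to acquire its metadata locks.
mysql -h <host> -u <user> -p -e "
  SELECT trx_mysql_thread_id AS connection_id,
         trx_started,
         TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds,
         trx_query
  FROM information_schema.innodb_trx
  WHERE trx_started < NOW() - INTERVAL 60 SECOND
  ORDER BY trx_started;"
```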
---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/integrations/airbyte.md

# Airbyte integration

> With PlanetScale Connect, you can extract data from your PlanetScale database and safely load it into other destinations for analysis, transformation, and more.

We implemented an [Airbyte](https://airbyte.com/) connector as the pipeline between your PlanetScale source and selected destination. This document will walk you through how to connect your PlanetScale database to Airbyte.

## Connect to Airbyte

Only [Airbyte Open Source](https://docs.airbyte.com/quickstart/deploy-airbyte) supports the PlanetScale data source. In this section, you'll learn how to set up Airbyte and connect your PlanetScale source.

### Requirements

* A PlanetScale database
* [Docker Desktop](https://www.docker.com/products/docker-desktop/) (Docker terms apply)

### Set up Airbyte locally

Install [Docker Desktop](https://www.docker.com/products/docker-desktop/). Follow the related installation instructions included within the [Airbyte Quickstart Documentation](https://docs.airbyte.com/using-airbyte/getting-started/oss-quickstart).

Open Airbyte in the browser at [http://localhost:8000](http://localhost:8000).

### Set up PlanetScale source

Now that Airbyte is running locally, let's set up the custom PlanetScale source.

In the Airbyte dashboard, click "**Settings**" on the bottom left. Click "**Sources**" on the left sidebar. Click the "**New connector**" button. Click the "**Add a new Docker connector**" option.

Fill in the connector values as follows:

* **Connector display name**: PlanetScale
* **Docker repository name**: planetscale/airbyte-source
* **Docker image tag**: `latest`
* **Connector Documentation URL**: /docs/vitess/integrations/airbyte

You can find the [PlanetScale Airbyte Source Dockerhub release page here](https://hub.docker.com/r/planetscale/airbyte-source).

Airbyte new PlanetScale connector

### Fill in PlanetScale connection information

You're now ready to connect your PlanetScale database to Airbyte.

Click on the database and branch you want to connect to. Click "**Connect**" and select "**General**" from the "**Connect with**" dropdown. Leave this tab open, as you'll need to copy these credentials shortly.

Back in Airbyte, click "**Sources**" in the main left sidebar > "**New source**". Select the new PlanetScale source you created from the dropdown.

Fill in the "**Set up the source**" values as follows:

* **Name**: Any name of your choice
* **Source type**: Select "PlanetScale"
* **Host**: Paste in the copied value for `host`
* **Database**: Paste in the copied value for `database`
* **Username**: Paste in the copied value for `username`
* **Password**: Paste in the copied value for `password`

Airbyte - PlanetScale source setup

You can also provide some optional values:

* **Replicas**: Select whether or not you want to collect data from replica nodes.
* **Shards**: Map your shards.
* **Starting GTIDs**: Start replication from a specific GTID per keyspace shard.

Airbyte - PlanetScale optional setup

You can see the [PlanetScale airbyte-source README](https://github.com/planetscale/airbyte-source/blob/main/README.md) for more details on these options.

Click "**Set up source**" to connect. You should get a success message that the connection test passed.

### Choose your destination

With the connection complete, you can now choose your destination.
Click "**Destinations**" in the sidebar or the "**New destination**" button on the source connection page. Set up the destination you want to sync your data to. Each destination should have a Setup Guide linked on its destination setup page. ### Configure a connection Now to get the connection fully set up. Click on "Connections" on the left side bar. If you have not yet set up any connectors, you should see this: Airbyte - New connection Click the button to set up a connection. Otherwise, click "**New Connection**" in the top right corner. From here, follow these steps: On the "**Define source**" page, choose your PlanetScale source as the **source**. Airbyte - Source On the "**Define destination**" page, select the **destination** you want to sync your PlanetScale data to. For this demo, we are using a CSV destination. Airbyte - Source On the "**Select streams**" page, select a sync mode. Airbyte - Source Also on this page, you will need to select the specific tables and columns you want to sync. For each, choose what type of sync mode you'd like to use for each source table. Airbyte - Sync * **Incremental** — Incremental sync pulls *only* the data that has been modified/added since the last sync. We use [Vitess VStream](https://vitess.io/docs/concepts/vstream/) to track the stopping point of the previous sync and only pull any changes since then. * **Full refresh** — Full refresh pulls *all* data at every scheduled sync frequency. On the "**Configure connection**" page, choose a sync frequency, which is how often we will connect to your PlanetScale database to download data. Airbyte - Connection Click "**Finish and sync**". Everything is now configured to pull your PlanetScale data into Airbyte and sync it to the selected destination on the schedule you chose. To run the connection, click "**Connections**" > "**Launch**". ## Handling schema changes Airbyte will not automatically detect when you make schema changes to your PlanetScale database. If you drop a column, your sync should throw an error as it looks for a column that doesn't exist. However, if you add a column, the sync will continue without any errors. Airbyte will be unaware of the new column altogether. This is known as schema drift. Whenever you perform a schema change, you need to notify Airbyte of it: In the Airbyte dashboard, click "**Connections**", select the connection, then navigate to the "**Schema**" tab. Click "**Refresh source schema**". Click "**Save changes**". Keep in mind, this might delete all data for the connection and start a new sync from scratch. ## Stopping Airbyte At any point, you can disable any incremental or full syncs by going to the 'Connection' settings page and clicking 'Delete this connection'. This will not touch any of the source or destination data, but will prevent Airbyte from doing any further operations. Airbyte - PlanetScale disconnection ## Need help? Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale. --- > To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt --- # Source: https://planetscale.com/docs/vitess/imports/amazon-aurora-migration-guide.md # Amazon Aurora migration guide ## Overview This document will demonstrate how to migrate a database from Amazon Aurora (MySQL compatible) to PlanetScale. 
This guide assumes you are using Amazon Aurora (MySQL compatible) on RDS. If you are using MySQL on Amazon RDS, follow the [Amazon RDS for MySQL migration guide](/docs/vitess/imports/aws-rds-migration-guide). Other database systems (non-MySQL or MariaDB databases) available through RDS will not work with the PlanetScale import tool.

We recommend reading through the [Database import documentation](/docs/vitess/imports/database-imports) to learn how our import tool works before proceeding.

## Prerequisites

Gather the following information from the AWS Console:

* **Database cluster endpoint address** - Located in the "**Connectivity & security**" tab (use the regional cluster endpoint, not reader or writer instances)
* **Port number** - Typically 3306
* **Master username and password** - Your Aurora root credentials

The Connectivity & security tab of the database in RDS.

## Step 1: Configure server settings

Your Aurora database needs specific server settings configured before you can import. Follow these steps to configure GTID mode, binlog format, and `sql_mode`.

### Check your current parameter group

Your Amazon Aurora database is either using the default DB cluster parameter group (e.g., `default.aurora-mysql8.0`) or a custom one. You can view it in the "**Configuration**" tab of your regional database cluster (not reader or writer instances).

The Configuration tab of the database view in RDS.

### Configure the parameter group

If you are using the default DB cluster parameter group, you'll need to create a new parameter group to reconfigure settings. To create a parameter group, select "**Parameter groups**" from the left nav and then "**Create parameter group**".

The Parameter groups view in RDS.

Specify the **Parameter group family**, **Type**, **Group name**, and **Description**. All fields are required.

* Parameter group family: `aurora-mysql8.0`
* Type: DB Cluster Parameter Group (Note: Not "DB Parameter Group" type)
* Group name: `psmigrationgroup` (or your choice)
* Description: Parameter group for PlanetScale migration

You'll be brought back to the list of available parameter groups when you save.

Edit the settings in your custom DB cluster parameter group. Select your parameter group from the list. Click "**Edit parameters**" to unlock editing.

The header of the view when editing a parameter group.

Search for "**gtid**" and update:

* `gtid-mode`: ON
* `enforce_gtid_consistency`: ON

Search for "**sql_mode**" and update:

* `sql_mode`: NO_ZERO_IN_DATE,NO_ZERO_DATE,ONLY_FULL_GROUP_BY

Search for "**binlog_format**" and update:

* `binlog_format`: ROW

Click "**Save changes**".

Associate the DB cluster parameter group to your database. Select "**Databases**" from the left nav, select your regional cluster (not writer or reader instance), and click "**Modify**". Scroll to the **Additional configuration** section. Update the **DB cluster parameter group** to your new parameter group. Click "**Continue**".

The Additional configuration section of the database configuration view.

Choose when to apply:

* **Apply during the next scheduled maintenance window** - Applied during the maintenance window
* **Apply immediately** - Applied now, but requires a manual reboot

Click "**Modify DB instance**".

Reboot your database's writer instance to apply the settings. Click "**Actions**" > "**Reboot**". (Make sure you're selecting the writer instance, not the regional cluster.) This will briefly disconnect active users! The parameter group changes won't take effect without a reboot.

Confirm the reboot.
You can check the status in the databases list (click refresh to update).

## Step 2: Enable binary logging

Binary logging must be enabled for the import to work. On Aurora/RDS, binary logging is tied to automated backups. To enable binary logging, [enable automated backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.Enabling) by setting the backup retention period to any value greater than zero days.

Verify binary logging is enabled:

```sql theme={null}
mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
```

## Step 3: Configure binlog retention

Set the binary log retention period to ensure logs aren't purged during the import. For most cases, 48 hours is sufficient, but larger imports may need more time.

Longer retention periods use more disk space. Evaluate your binlog size to avoid running out of disk space. Contact [PlanetScale Support](https://support.planetscale.com/hc/en-us) if you need assistance.

Set the retention period using the `mysql.rds_set_configuration()` procedure:

```sql theme={null}
CALL mysql.rds_set_configuration('binlog retention hours', 48);
```

Verify the setting:

```sql theme={null}
CALL mysql.rds_show_configuration;
```

Expected output:

```
+------------------------+-------+-------------------------------------------------------------------------------------------------------+
| name                   | value | description                                                                                           |
+------------------------+-------+-------------------------------------------------------------------------------------------------------+
| binlog retention hours | 48    | binlog retention hours specifies the duration in hours before binary logs are automatically deleted. |
+------------------------+-------+-------------------------------------------------------------------------------------------------------+
```

## Step 4: Ensure database is publicly accessible

PlanetScale needs to connect to your Aurora database over the internet. Check that your database is publicly accessible.

In the writer instance, go to the "**Connectivity & security**" tab. Under "**Security**", check if **Publicly accessible** is set to "Yes". If it says "No", you'll need to modify the database settings to enable public access.

If you cannot make the database publicly accessible, [contact us](https://planetscale.com/contact) to discuss alternative import options.

## Step 5: Create a migration user

Create a dedicated user with limited privileges for the import process.
Connect to your Aurora database using the MySQL command line with your master credentials:

```bash theme={null}
mysql -u admin -p -h [your-aurora-endpoint]
```

Run the following script, replacing the placeholders:

* `<password>` - Password for the `migration_user` account
* `<database_name>` - Name of the database you're importing

```sql theme={null}
CREATE USER 'migration_user'@'%' IDENTIFIED BY '<password>';
GRANT PROCESS, REPLICATION SLAVE, REPLICATION CLIENT, RELOAD ON *.* TO 'migration_user'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, SHOW VIEW, LOCK TABLES ON `<database_name>`.* TO 'migration_user'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON `ps_import_%`.* TO 'migration_user'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON `_vt`.* TO 'migration_user'@'%';
GRANT EXECUTE ON PROCEDURE mysql.rds_show_configuration TO 'migration_user'@'%';
GRANT SELECT ON mysql.db TO 'migration_user'@'%';
GRANT SELECT ON mysql.func TO 'migration_user'@'%';
GRANT SELECT ON mysql.tables_priv TO 'migration_user'@'%';
GRANT SELECT ON mysql.user TO 'migration_user'@'%';
GRANT SELECT ON performance_schema.* TO 'migration_user'@'%';
FLUSH PRIVILEGES;
```

Save the username and password securely - you'll need them for the import.

## Step 6: Configure RDS security group

Allow PlanetScale to connect by adding PlanetScale's IP addresses to your security group. The specific IP addresses depend on your PlanetScale database region. These will be shown during the import workflow on the **Connect to external database** step. See the [Import public IP addresses](/docs/vitess/imports/import-tool-migration-addresses) page for more details.

### Add IP addresses to security group

1. Navigate to the "**Connectivity & security**" tab of your writer instance
2. Click the VPC security group link

   The Connectivity & security tab of the database view in RDS.
3. Select the "**Inbound rules**" tab, then "**Edit inbound rules**"

   The view of security groups associated with the RDS instance.
4. Click "**Add rule**"
5. **Type**: Select `MYSQL/Aurora`
6. **Source**: Enter the first PlanetScale IP address (AWS will format it as `x.x.x.x/32`)
7. Repeat for each IP address in your region
8. Click "**Save rules**"

The Edit inbound rules view where source traffic can be allowed.

## Importing your database

Now that your Aurora database is configured, follow the [Database Imports guide](/docs/vitess/imports/database-imports) to complete your import. When filling out the connection form in the import workflow, use:

* **Host name** - Your Aurora cluster endpoint address (from Prerequisites)
* **Port** - 3306 (or your custom port)
* **Database name** - The exact database name to import
* **Username** - `migration_user`
* **Password** - The password you set in Step 5
* **SSL verification mode** - Select based on your Aurora SSL configuration

The Database Imports guide will walk you through:

* Creating your PlanetScale database
* Connecting to your Aurora database
* Validating your configuration
* Selecting tables to import
* Monitoring the import progress
* Switching traffic and completing the import

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.
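As a final sanity check before starting the import workflow, you can confirm that the server settings from Steps 1-3 took effect after the reboot. This is an optional check, shown here as a sketch using the `migration_user` created in Step 5:

```bash theme={null}
# Expect gtid_mode=ON, enforce_gtid_consistency=ON, binlog_format=ROW, log_bin=ON.
mysql -u migration_user -p -h [your-aurora-endpoint] -e "
  SHOW VARIABLES WHERE Variable_name IN
    ('gtid_mode', 'enforce_gtid_consistency', 'binlog_format', 'log_bin');"
```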
---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/monitoring/anomalies.md

# Source: https://planetscale.com/docs/postgres/monitoring/anomalies.md

# Anomalies

> Anomalies are defined as periods with a substantially elevated percentage of slow-running queries.

## Overview

PlanetScale Insights continuously analyzes your query performance to establish a baseline for expected performance. When a high enough percentage of queries are running more slowly than the baseline expectation, we call this an anomaly.

## Using the Anomalies graph

The graph under the Anomalies tab shows the percentage of queries executing slower than the 97.7th (2-sigma) percentile baseline on the y-axis and the period of time on the x-axis. The "expected" line shows the percent of queries that are statistically expected in a database with uniform query performance over time. Slight deviations from the expected value are normal. Only substantial and sustained deviations from the expected value are considered an anomaly.

Database health graph showing two anomalies

Any periods where your database was unhealthy will be highlighted with a red icon representing a performance anomaly. Each anomaly on the graph is clickable. Clicking on it will pull up more details about it in the table below the graph, such as: duration, percentage of increase, and when the anomaly occurred. We also overlay any deploy requests that happened during that period over the anomaly graph.

On top of this, we also surface any impact to the following:

* The query that triggered the anomaly
* CPU utilization
* Memory
* IOPS
* Queries per second
* Rows written per second
* Rows read per second
* Errors per second

## Anomalies vs query latency

You may notice a correlation between some areas in the query latency graph and the anomalies graph. Conversely, in some cases, you may see a spike in query latency, but no corresponding anomaly.

Increased query latency *can* be indicative of an anomaly, but not always. Query latency may increase and decrease in ways that don't always indicate an actual problem with your database.

For example, you may run a weekly report that consists of a few slow-running queries. These queries are always slow. Every week, you'll see a spike on your query latency graph during the time that your weekly report is generated, but not on your anomaly violations graph. The queries are running at their *expected* latency, so this is not considered an anomaly.

## What should I do if my database has an anomaly?

The purpose of the Anomalies tab is to show you relevant information so you can determine what caused an anomaly and correct the issue. Let's look at an example scenario.

You deploy a feature in your application that contains a new query. This query is slow, running frequently, and is hogging database resources. This new slow query is running so often that it's slowing down the rest of your database. Because your other queries are now running slower than expected, an anomaly is triggered.

In this case, we will surface the new slow-running query so that you can find ways to optimize it to free up some of the resources it's using. Adding an index will often solve the problem. You can test this by adding the index, creating a deploy request, and deploying it. If it's successful, you'll quickly see the anomaly end.

On the other hand, an anomaly does not necessarily mean you need to take any action.
One common example where you may see an anomaly is in the case of large active-running backups. In this case, we will tell you that a backup was running during the time of the anomaly. Even if it causes an anomaly, we do not recommend you turn off backups to prevent possible data loss.

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/architecture.md

# PlanetScale Vitess architecture

> PlanetScale's Vitess product is designed for reliability, scalability, and developer productivity.

## Overview

We achieve these goals through a combination of [MySQL](/docs/vitess/terminology#mysql), [Vitess](/docs/vitess/terminology#vitess), and our own application and ecosystem we have built atop these open-source technologies. There is a great deal of infrastructure that enables our databases to be highly available, secure, and resilient. In this article, you'll learn about what powers PlanetScale databases and how you can view your database's configuration on our app.

## The infrastructure diagram

After creating a PlanetScale account and joining at least one organization, you can create a database. Each new database has a single default [keyspace](/docs/vitess/terminology#keyspace) — a logical database — with the same name as the database. On the dashboard of every PlanetScale database is a diagram outlining the infrastructure that powers the database.

Architecture diagram for a PlanetScale database

By default, the architecture diagram will show the architecture for the keyspace corresponding to your default branch. Here's how you can tell what keyspace and branch you are viewing the diagram of:

Architecture diagram for a PlanetScale database

### Production branches

Production branches are designed for production workloads, and as such are given enough resources to ensure high availability. By default, every production branch has a single primary MySQL instance and two replicas. Each primary also comes with 3 [VTGates](/docs/vitess/terminology#vtgate) across 3 availability zones, which act as proxies for your MySQL instances. These are all pictured in the diagram for a production branch:

Production branch architecture

Generally, the application connecting to this database need not be aware of these various components. One exception to this is if you are specifically trying to [send queries to a replica](/docs/vitess/scaling/replicas#how-to-query-replicas) (see the sketch at the end of this section).

### Development branches

Development branches are specced to enable the development and testing of new features and are not designed for production workloads. When a new development branch is created, a single MySQL node is created along with a VTGate that handles connections to that node. This is reflected in the diagram of a development branch.

Development branch architecture

When you promote a development branch to production status, PlanetScale automatically adds additional replicas and VTGates deployed across multiple availability zones in a given region.
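The replica sketch referenced above: Vitess lets a single session opt into replica reads by targeting the replica tablet type. This is an illustrative example rather than the authoritative instructions (see the replica documentation linked above), and the table name is hypothetical:

```bash theme={null}
# Open an interactive shell on the branch (any MySQL client works similarly).
pscale shell <database> <branch> --org <organization>

# Inside the shell, opt this session into replica reads, then query.
# Reads now go to a replica, trading a little replication lag for
# reduced load on the primary.
mysql> USE @replica;
mysql> SELECT COUNT(*) FROM users;
```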
### Read-only regions

The primary of your database is the only node that can accept writes, and it resides in a single region. You can add [read-only regions](/docs/vitess/scaling/read-only-regions) to a branch, which adds replicas in another region that can be used to serve read traffic. This can help reduce read latency for application servers that are distributed around the world.

Below, you can see our database has the primary and two replicas in `us-east-2`, with read-only replicas added in both `us-west-2` and `eu-central-1`.

Production branch with read-only regions architecture

The read-only replicas can be identified by the blue globe icon.

## Infrastructure metrics

Each element within the infrastructure diagram for PlanetScale database branches can be selected to display additional metrics related to that element. These metrics are displayed in expandable cards that present themselves when an element is selected. By default, the cards display metrics from the last 6 hours but can be adjusted if additional data is needed.

### VTGates

The VTGate node displays the total number of VTGates that exist for a given branch, as well as the number of availability zones in which they live. Selecting the VTGates node will show the following metrics:

* Number of connections.
* Latency.
* Queries received.
* CPU.
* Memory consumption.

VTGate metrics

### MySQL nodes

Each MySQL node in the diagram will display whether it is the primary node or a replica, along with the region where that node is deployed. Clicking any of the MySQL nodes will display the following metrics:

* Database reads and writes for that node.
* Queries served.
* IOPS.
* CPU and memory utilization.
* Storage utilization over the past week.

Primary MySQL node metrics

Selecting a replica will display the replication lag in addition to the other metrics.

Replication lag diagram

### Replication lag at a glance

Within the infrastructure diagram, you'll also notice that there is a number near the connection points for each replica. These numbers are a way to read the replication lag between the primary node and that given node at a glance.

Replication lag

### Database shards

If your database is [sharded](/docs/vitess/sharding), the infrastructure diagram will represent that as a green stack of shards.

Stacked shards

Selecting the stack from the diagram will open a card displaying all of the shards belonging to that keyspace.

Shard list

After selecting a shard, you'll be able to choose to look at metrics for either that shard's primary or one of its replicas.

Shard list

Selecting one will show you the metrics for that specific node in your database architecture.

Shard

### Resizing

You can use the [Clusters page](/docs/vitess/cluster-configuration) menu to resize your keyspaces. When a resize is in progress, this will be indicated at the top of the diagram.

Architecture diagram with resize indicator

Click on "**View**" to see the status for each shard being resized:

Per-shard resize status

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.
---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/audit-log.md

# Source: https://planetscale.com/docs/security/audit-log.md

# Source: https://planetscale.com/docs/cli/audit-log.md

# PlanetScale CLI commands: audit log

## Getting Started

Make sure to first [set up your PlanetScale developer environment](/docs/cli/planetscale-environment-setup). Once you've installed the `pscale` CLI, you can interact with PlanetScale and manage your databases straight from the command line.

## The `audit-log` command

Lists all [audit logs](/docs/security/audit-log) in an organization. The user running the command must have [Organization-level permissions](/docs/security/access-control), specifically `list_organization_audit_logs`.

**Usage:**

```bash theme={null}
pscale audit-log <SUB-COMMAND>
```

### Available sub-commands

| **Sub-command** | **Description** | **Product** |
| :-------------- | :------------------------------------- | :--------------- |
| `list` | List all audit logs in an organization | Postgres, Vitess |

### Available flags

| **Flag** | **Description** |
| :-------------------------- | :------------------------------------------------------ |
| `-h`, `--help` | View help for the `audit-log` command |
| `--action` | Filter based on action type |
| `--limit <int>` | The number of events to return. Min: 1, Max: 100 |
| `--starting-after <id>` | The ID of the audit log to start after (for pagination) |
| `--org <organization>` | The organization for the current user |

### Global flags

| **Command** | **Description** |
| :------------------------------ | :----------------------------------------------------------------------------------- |
| `--api-token <token>` | The API token to use for authenticating against the PlanetScale API. |
| `--api-url <url>` | The base URL for the PlanetScale API. Default is `https://api.planetscale.com/`. |
| `--config <file>` | Config file. Default is `$HOME/.config/planetscale/pscale.yml`. |
| `--debug` | Enable debug mode. |
| `-f`, `--format <format>` | Show output in a specific format. Possible values: `human` (default), `json`, `csv`. |
| `--no-color` | Disable color output. |
| `--service-token <token>` | The service token for authenticating. |
| `--service-token-id <id>` | The service token ID for authenticating. |

## Examples

### The `list` sub-command with `--org` flag

**Command:**

```bash theme={null}
pscale audit-log list --org <organization>
```

**Output:**

```bash theme={null}
  ID (25)      ACTOR (25)   ACTION             EVENT                          REMOTE IP       LOCATION          CREATED AT
 ------------ ------------ ------------------ ------------------------------ --------------- ----------------- ------------
  xxxxxxxxxx   Name         Open_web_console   main branch.open_web_console   xxx.xxx.xxx.x   Los Angeles, CA   1 day ago
```

### Pagination

Use the ID from the last result and pass it as the `--starting-after` value to retrieve the next page of results.

```bash theme={null}
pscale audit-log list --limit 5 --starting-after <id>
```

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.
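To automate the pagination pattern above, you can script the CLI's JSON output. Below is a sketch, assuming `jq` is installed and that each entry in the JSON output exposes an `id` field:

```bash theme={null}
#!/usr/bin/env bash
# Page through all audit logs for an organization, 100 entries at a time.
org="my-org"   # hypothetical organization name
after=""

while true; do
  page=$(pscale audit-log list --org "$org" --format json --limit 100 \
    ${after:+--starting-after "$after"})
  printf '%s' "$page" | jq -r '.[].id'           # process each entry's ID
  count=$(printf '%s' "$page" | jq 'length')
  [ "$count" -lt 100 ] && break                   # short page: no more results
  after=$(printf '%s' "$page" | jq -r '.[-1].id') # last ID starts the next page
done
```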
---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/postgres/imports/aurora-dms.md

# Postgres Imports - Amazon DMS with CloudFormation

> This method uses Infrastructure as Code with Step Functions workflow automation for a completely managed migration experience.

[Amazon Database Migration Service (DMS)](https://aws.amazon.com/dms/) provides a fully automated approach to migrate your PostgreSQL database to PlanetScale Postgres.

## Overview

This automated migration method:

* **Pre-migration schema setup** (essential for production)
* Deploys DMS infrastructure via CloudFormation template
* Configures source and target database endpoints automatically
* Creates Step Functions workflow for automated migration orchestration
* Provides built-in monitoring, notifications, and automated cleanup
* Requires minimal manual intervention - mostly console clicks

**Critical: AWS DMS Schema Object Limitations**

AWS DMS **only migrates table data and primary keys**. All other PostgreSQL schema objects must be handled separately:

* Secondary indexes
* Sequences and their current values
* Views, functions, and stored procedures
* Constraints (foreign keys, unique, check)
* Triggers and custom data types

Deploy your complete Aurora schema to PlanetScale BEFORE starting DMS migration to preserve performance and avoid application errors.

This method requires an AWS account and will incur AWS DMS charges. The CloudFormation template includes cost optimization features. Review [AWS DMS pricing](https://aws.amazon.com/dms/pricing/) before proceeding.

## General Prerequisites

Before starting the migration:

* Have an active AWS user with CloudFormation, EC2, DMS, SNS, and Step Functions permissions
* Source Aurora database accessible from AWS VPC
* Connection details for your PlanetScale Postgres database from the console
* VPC with at least 2 subnets in different Availability Zones

## Source database prerequisites

The Task that AWS DMS runs will automatically perform these 7 validation checks before starting the migration. Confirm before starting this process that they will succeed.
| Check Name | What It Validates | Required Action (if needed) |
| :------------------------------ | :--------------------------------------------------------- | :--------------------------------------------------------------------------------- |
| Database Version Compatibility | Verifies your PostgreSQL version is supported by AWS DMS | Ensure you're running a supported PostgreSQL version (9.4+) |
| Target Database Privileges | Confirms PlanetScale user has sufficient permissions | No action should be needed - PlanetScale credentials include required permissions |
| Replication Slots Available | Checks that replication slots are available for CDC | Verify `max_replication_slots >= 1` in Aurora parameter group |
| Source Database Read Privileges | Validates source user can read all tables for migration | Ensure source user has SELECT privileges on all tables to migrate |
| WAL Level Configuration | Confirms WAL level is set to 'logical' for CDC replication | Set `wal_level = logical` in Aurora parameter group (requires restart) |
| WAL Sender Timeout | Ensures WAL sender timeout is at least 10 seconds | Set `wal_sender_timeout >= 10000` (10 seconds) in parameter group |
| Maximum WAL Senders | Verifies enough WAL sender processes for CDC | Set `max_wal_senders >= 2` in Aurora parameter group |

## Step 1: Pre-Migration Schema Setup

Deploy your complete Aurora schema to PlanetScale BEFORE starting the CloudFormation stack. This ensures optimal performance and prevents application errors.

### Extract and Apply Schema

Extract your complete schema from Aurora:

```bash theme={null}
pg_dump -h aurora-cluster-endpoint.amazonaws.com -p 5432 \
  -U username -d database --schema-only \
  --no-owner --no-privileges -f aurora_schema.sql
```

Apply the schema to PlanetScale:

```bash theme={null}
psql -h your-planetscale-host -p 5432 -U username -d database -f aurora_schema.sql
```

**Foreign Key Constraints**

If the schema application fails due to foreign key constraint issues, you can temporarily remove them from the schema file and apply them after DMS completes the data migration.
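One way to implement the note above, shown as a sketch rather than the guide's prescribed tooling: `pg_dump`'s standard `--section` flags split the dump so that constraints land in a separate, editable file. Note that `post-data` also contains indexes and primary keys, which the schema-first guidance says to apply up front, so you would apply everything from that file except the `FOREIGN KEY` statements before migrating. Hosts and file names here are illustrative:

```bash theme={null}
# Table definitions, sequences, views, and functions (no indexes/constraints).
pg_dump -h aurora-cluster-endpoint.amazonaws.com -U username -d database \
  --no-owner --no-privileges --section=pre-data -f pre_data.sql

# Indexes, primary/foreign keys, and triggers. Review this file: apply the
# index and primary key statements up front (per the schema-first guidance),
# and hold back only the FOREIGN KEY statements until DMS completes.
pg_dump -h aurora-cluster-endpoint.amazonaws.com -U username -d database \
  --no-owner --no-privileges --section=post-data -f post_data.sql

psql -h your-planetscale-host -U username -d database -f pre_data.sql
```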
### Verify Schema Application

Quickly verify your schema was applied successfully:

```sql theme={null}
-- Check that tables and sequences exist
SELECT
  (SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'public') as tables,
  (SELECT COUNT(*) FROM information_schema.sequences WHERE sequence_schema = 'public') as sequences,
  (SELECT COUNT(*) FROM pg_indexes WHERE schemaname = 'public') as indexes;
```

## Step 2: Check DMS IAM Roles

Before deploying, check if DMS service roles already exist in your AWS account:

Go to the [IAM Console](https://console.aws.amazon.com/iam/). Click "Roles" in the left sidebar. Search for these role names:

* `dms-vpc-role`
* `dms-cloudwatch-logs-role`

Then:

* **If neither role exists**: Set the `CreateDMSRoles` parameter to `true` in Step 4
* **If both roles exist**: Set the `CreateDMSRoles` parameter to `false` in Step 4
* **If one role exists but not the other**: Consider manually creating the roles per guidance in the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_VPC_Endpoints.html#CHAP_VPC_Endpoints.prereq) and set the `CreateDMSRoles` parameter to `false` in Step 4

## Step 3: Download CloudFormation Template

Get the CloudFormation template:

Visit [https://github.com/planetscale/migration-scripts/tree/main/postgres-planetscale](https://github.com/planetscale/migration-scripts/tree/main/postgres-planetscale). Right-click on `aurora-to-ps-dms.yaml` → "Save link as". Save the file to your computer.

## Step 4: Deploy CloudFormation Stack

Navigate to the [AWS CloudFormation Console](https://console.aws.amazon.com/cloudformation/). Click **"Create stack"** → **"With new resources (standard)"**. Select **"Upload a template file"**. Click **"Choose file"** and select the downloaded template. Click **"Next"**.

### Configure Stack Parameters

**Stack name**: `postgres2planetscale`, or any name you want, but note that overly long names can cause resource creation issues.

#### VPC Information

* **VPC ID**: Select your VPC from the dropdown
* **Subnet IDs**: Select 2+ subnets in different AZs which are "public" subnets in that they have route tables through an Internet Gateway (IGW) and NACLs that allow outbound routing.
  See [Subnet types](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-types) in the AWS documentation for more information.

#### Source Database (Your Aurora Postgres database)

* **Source Endpoint Host**: Your Aurora hostname (the primary write endpoint is best, not a proxy)
* **Source Port**: `5432` (or your custom port, valid range: 1024-65535)
* **Source Database Name**: Your database name (1-63 characters, must start with a letter, alphanumeric and underscore only)
* **Source Username**: Your database username (1-63 characters, must start with a letter, alphanumeric and underscore only)
* **Source Password**: Your database password (4-128 characters, will be hidden in the console)

#### Target Database (PlanetScale Postgres)

From your PlanetScale console connection details:

* **Target Endpoint Host**: PlanetScale host (from connection details)
* **Target Port**: `5432` (standard PostgreSQL port)
* **Target Database Name**: PlanetScale database name (1-63 characters, must start with a letter, alphanumeric and underscore only)
* **Target Username**: PlanetScale username (1-63 characters, must start with a letter, alphanumeric and underscore only)
* **Target Password**: PlanetScale password (4-128 characters, will be hidden in the console)

#### Additional Features

* **DMS Instance Class**: `dms.c6i.xlarge` (template default, can select from dropdown)
  * Options include: dms.t3.micro, dms.t3.small, dms.t3.medium, dms.t3.large, dms.c6i.large, dms.c6i.xlarge, dms.c6i.2xlarge, dms.c6i.4xlarge
  * Recommended: dms.c6i.large or larger for production workloads
* **Migration Type**: `full-load-and-cdc` (recommended)
  * Options: `full-load`, `cdc`, `full-load-and-cdc`
* **Migration Bucket Name**: Base name for the S3 bucket that stores DMS assessment reports
  * Must be 3-35 characters and start/end with a lowercase letter or number
  * Can contain lowercase letters, numbers, hyphens, and periods
  * The region and account ID will be automatically appended to create a unique bucket name
  * Example: `my-migration-bucket` becomes `my-migration-bucket-us-east-1-123456789012`
* **Enable Automation**: `true` ⭐ **Important: This creates the Step Functions workflow**
* **Create DMS Roles**: `true` or `false` (based on Step 2 findings)
* **Notification Email**: Your email address for migration status updates and alerts

**Schema-First Approach Built-In**

The CloudFormation template is pre-configured for schema-first migrations with:

* `TargetTablePrepMode: DO_NOTHING` (automatically set, uses your existing schema)
* Enhanced performance tuning settings
* Built-in row-level validation
* Batch processing optimizations
* Memory tuning for large datasets

Complete Step 1 (pre-migration schema setup) before deploying this stack for optimal results.

Click **"Next"** → **"Next"** → Check **"I acknowledge that AWS CloudFormation might create IAM resources"**. Click **"Submit"**.

## Step 5: Wait for Stack Completion

Stay on the CloudFormation console. Click on your stack name to view details. Watch the **"Events"** tab for progress. Wait for the stack status to show **`CREATE_COMPLETE`** (typically 10-15 minutes).

## Step 6: Confirm your email notification subscription

Check the inbox for the email used above. You will get an email from `no-reply@sns.amazonaws.com` titled "DMS Migration Workflow Notifications". Click the link for **Confirm subscription**.

Note that after the migration task completes and the stack is deleted, you will receive no further communications from this AWS SNS topic, but other SNS topics may use the same address.
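If you prefer to watch Step 5 from a terminal instead of the console, the standard AWS CLI can block until the stack finishes. A sketch, using the hypothetical stack name from Step 4:

```bash theme={null}
# Block until stack creation finishes (typically 10-15 minutes),
# then print the final status; expect CREATE_COMPLETE.
aws cloudformation wait stack-create-complete --stack-name postgres2planetscale
aws cloudformation describe-stacks --stack-name postgres2planetscale \
  --query 'Stacks[0].StackStatus' --output text
```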
## Step 7: Get Workflow Input Configuration

In your completed CloudFormation stack, click the **"Outputs"** tab. Find the output key **`StepFunctionsPayloadTemplate`**. **Copy the entire JSON value** (this contains the configuration for your migration).

The JSON should look like the following example:

```json theme={null}
{
  "replicationInstanceArn": "arn:aws:dms:us-east-2:1234567890:rep:YMZ2AH4YAJCRNJKOWBR7EEIRGE",
  "sourceEndpointArn": "arn:aws:dms:us-east-2:1234567890:endpoint:SIVNPTNFJZDCVK4ODTN6ZLONN4",
  "targetEndpointArn": "arn:aws:dms:us-east-2:1234567890:endpoint:MLSCVENBKVBWJKRVJ27EWB32IU",
  "replicationTaskArn": "arn:aws:dms:us-east-2:1234567890:task:QZCBNW565VH2JG2KE5UXX42LS4",
  "sourceEndpointHost": "prod-aurora-cluster.cluster-abc1234567.us-east-2.rds.amazonaws.com",
  "sourcePort": "5432"
}
```

## Step 8: Locate Step Functions Workflow

While still in the **"Outputs"** tab, find the key **`StepFunctionsConsoleURL`**. Click on the URL link to open the AWS Step Functions console for the workflow created here.

The workflow includes these automated steps:

* **Network Connectivity Check**: Tests connections to both source and target databases
* **Security Group Auto-Fix**: Automatically corrects Aurora security group settings if DMS connectivity fails
* **Pre-Migration Validation**: Validates database schemas, table structures, and data types with row-level validation
* **Migration Task Start**: Launches optimized DMS full-load and CDC replication with performance tuning
* **Progress Monitoring**: Continuously monitors migration progress with enhanced error handling and batch processing
* **Built-in Optimization**: Uses tuned task settings for improved throughput and memory management

## Step 9: Start Migration Workflow

In the Step Functions state machine, click **"Start execution"**.

* **Execution name**: Leave as auto-generated
* **Input**: **Paste the JSON** you copied from the CloudFormation outputs

Click **"Start execution"**. The workflow will automatically:

* Test database connections
* Start the DMS migration task
* Monitor progress
* Send notifications
* Handle errors and retries

## Step 10: Monitor Migration Progress

There are several tools you can use to monitor progress and, if necessary, troubleshoot failures.

### DMS Console

Go to the [DMS Console](https://console.aws.amazon.com/dms/). Click **"Tasks"**. Select the task **Identifier** for this migration. View your task for detailed table-level progress.

### Step Functions Console

Watch the visual workflow progress in the Step Functions console. Each step will show green (success), red (failure), or blue (in progress). Click on individual steps to see detailed logs.

### CloudWatch Dashboard

Navigate to the [CloudWatch Console](https://console.aws.amazon.com/cloudwatch/). Click **"Dashboards"** → **"Automatic Dashboards"** → **"DMS-Migration-Dashboard"**. Monitor key metrics:

* Full load progress percentage
* CDC latency
* Error counts
* Throughput

### Wait for automated emails

You will receive an email once the migration has reached 100% full load and CDC replication is ongoing. If the workflow does fail at any point, you will instead receive an email indicating where the failure occurred, and you can then review the previously mentioned tools for more information.

## Step 11: Post-Migration Sequence Synchronization

After DMS completes, sequences need their values synchronized:

**Critical: Sequence Synchronization**

Sequence values must be set ahead of Aurora values to prevent duplicate key errors when applications start using PlanetScale.
## Step 11: Post-Migration Sequence Synchronization

After DMS completes, sequences need their values synchronized:

**Critical: Sequence Synchronization**

Sequence values must be set ahead of Aurora values to prevent duplicate key errors when applications start using PlanetScale.

### Get Current Sequence Values from Aurora

```sql theme={null}
-- Run on the Aurora database to get all current sequence values
-- (pg_sequences reports a NULL last_value for sequences that have never been used)
SELECT
  schemaname,
  sequencename,
  last_value,
  'SELECT setval(''' || sequencename || ''', ' || (last_value + 1000) || ');' AS update_command
FROM pg_sequences
WHERE schemaname = 'public'
ORDER BY sequencename;
```

### Update Sequences in PlanetScale

```sql theme={null}
-- For each sequence, run the update command generated above.
-- Example commands (values set ahead of Aurora):
SELECT setval('users_id_seq', 16234);   -- Aurora value + 1000
SELECT setval('orders_id_seq', 99765);  -- Aurora value + 1000
SELECT setval('products_id_seq', 6432); -- Aurora value + 1000

-- Verify sequence values are ahead of Aurora
SELECT sequencename, last_value
FROM pg_sequences
WHERE schemaname = 'public'
ORDER BY sequencename;
```

### Apply Remaining Constraints

Now apply the foreign key constraints that were deferred:

```sql theme={null}
-- Apply foreign key constraints
\i constraints.sql

-- Verify constraints were applied successfully
SELECT conname, contype, conrelid::regclass AS table_name
FROM pg_constraint
WHERE connamespace = 'public'::regnamespace
  AND contype = 'f' -- foreign key constraints
ORDER BY conrelid::regclass::text;
```

## Step 12: Application Cutover

When the Step Functions workflow or the DMS task itself indicates the migration is ready (status is "Load completed, replication ongoing"), you can begin your cutover process.

### Comprehensive Pre-Cutover Validation

**Complete Validation Required**

Validate ALL schema objects and data integrity before cutover. Missing objects will cause application failures.

```sql theme={null}
-- Validate that table row counts match Aurora
SELECT schemaname, relname AS table_name, n_live_tup AS estimated_rows
FROM pg_stat_user_tables
WHERE schemaname = 'public'
ORDER BY relname;
```

### Pre-Cutover Checklist (Automated)

AWS DMS ensures:

* Full load is 100% complete
* CDC latency is under 5 seconds
* Data validation passes
* Both databases are synchronized

### Cutover Process

1. **Complete sequence synchronization and constraint application** using the steps above.
2. **Run comprehensive validation** to ensure all objects are functional.
3. **Put the application in maintenance mode** and pause all writes from your application to Aurora.
4. **Wait for DMS to confirm the final sync.**
5. **Update your application's database connection strings** to use the PlanetScale details.
6. **Restart or redeploy your application.**
7. **Test critical functionality**, especially features using sequences and indexes.

## Automated Cleanup (mostly)

**Schema Objects and Cleanup**

The CloudFormation stack cleanup **does not** affect your migrated schema objects in PlanetScale. Your indexes, sequences, and other objects remain intact.

1. Go to your CloudFormation stack.
2. Click **"Delete"**.
3. Click **"Delete"** in the confirmation popup.

The first time you attempt to delete the stack, the process will fail to delete some of the resources. At minimum, the S3 bucket created to store the DMS pre-migration test reports must be manually emptied before it can be deleted. If the Step Functions workflow had to modify your Aurora security group, the rule it added needs to be deleted as well. Both of these resources could safely be left behind; however, the S3 bucket's data will incur ongoing charges.
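If you prefer the AWS CLI to the console steps that follow, you can empty a versioned bucket by deleting every object version and delete marker. A sketch, assuming fewer than 1,000 versions (the `delete-objects` batch limit); the bucket name is a placeholder:

```bash theme={null}
BUCKET=my-migration-bucket-us-east-1-123456789012  # placeholder: your assessment bucket

# Delete all object versions, then all delete markers
# (either command may error if there is nothing of that kind to delete; that is safe to ignore)
aws s3api delete-objects --bucket "$BUCKET" --delete "$(aws s3api list-object-versions \
  --bucket "$BUCKET" --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' --output json)"
aws s3api delete-objects --bucket "$BUCKET" --delete "$(aws s3api list-object-versions \
  --bucket "$BUCKET" --query '{Objects: DeleteMarkers[].{Key: Key, VersionId: VersionId}}' --output json)"
```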
1. In the CloudFormation stack **"Resources"** tab, find the resources where deletion failed.
2. Find the resource named **PreMigrationAssessmentBucket**.
3. Click the link under the **Physical ID** heading for this resource.

This will take you to the Amazon S3 console for the bucket. The assessment folder is a versioned object in S3, which means that directly deleting it here does not actually remove it, but instead places a delete marker on the version. To fully empty the bucket, navigate to the main console page:

1. Click **"Buckets"** from the top nav, or **"General purpose buckets"** from the left nav.
2. Select the bucket used; its name will start with the name of the CloudFormation stack.
3. Click **"Empty"**.

You can now re-attempt the stack deletion described above.

## Troubleshooting

### Stack Creation Issues

**Permission Errors:**

* Ensure your AWS user has CloudFormation, DMS, Step Functions, and IAM permissions
* Check that you acknowledged IAM resource creation during stack creation

**Network Issues:**

* Verify your VPC allows internet access for DMS to reach PlanetScale
* Check that security groups allow access on port 5432
* Ensure subnets are in different Availability Zones

### Step Functions Workflow Issues

**Workflow Creation Fails:**

* Verify you copied the complete JSON from the CloudFormation outputs
* Check that the Step Functions execution role exists

**Migration Task Fails:**

* Check the Step Functions execution details for specific error messages
* Verify database connection details are correct
* Ensure the source database has logical replication enabled

### Connection Problems

**Source Database:**

* Verify the hostname, port, username, and password
* Check that the source database allows connections from the DMS subnet
* Ensure the database has logical replication enabled (`rds.logical_replication = 1` for RDS)

**Target Database (PlanetScale):**

* Double-check the connection details from the PlanetScale console
* Verify the PlanetScale database is active and accessible
* Test connectivity from the AWS region to PlanetScale

### Schema-Related Issues

**"sequence does not exist" errors after cutover:**

```sql theme={null}
-- Check if the sequence exists
SELECT * FROM information_schema.sequences WHERE sequence_name = 'your_sequence';

-- Recreate a missing sequence
CREATE SEQUENCE your_sequence START WITH 1;
SELECT setval('your_sequence', (SELECT MAX(id) FROM your_table));
```

**Application slowness after migration:**

* Missing indexes are the most common cause
* Run `EXPLAIN ANALYZE` on slow queries to identify missing indexes
* Apply indexes from your schema extraction

**Foreign key constraint violations:**

```sql theme={null}
-- Find constraint violations before applying constraints
SELECT COUNT(*)
FROM child_table c
WHERE NOT EXISTS (SELECT 1 FROM parent_table p WHERE p.id = c.parent_id);
```

**Function/view dependency errors:**

* Apply objects in the correct order: sequences → indexes → views → functions → constraints
* Check for Aurora-specific functions that may need modification for PlanetScale

**Permission errors during schema application:**

* Ensure the PlanetScale user has CREATE privileges
* Check whether objects already exist and need to be dropped first

## Step Functions Workflow Benefits

Using the automated Step Functions workflow provides:

* **Visual Progress Tracking**: See each migration phase in real-time
* **Automatic Error Handling**: Built-in retry logic and error notifications
* **Audit Trail**: Complete log of migration steps and timings
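If you script your migrations, the same workflow can also be started and inspected from the AWS CLI rather than the console. A sketch; the ARNs below are placeholders — the state machine ARN comes from your stack's outputs, and `payload.json` is the JSON you copied in Step 7:

```bash theme={null}
# Start the migration workflow with the payload from the CloudFormation outputs
aws stepfunctions start-execution \
  --state-machine-arn "arn:aws:states:us-east-2:123456789012:stateMachine:my-migration-workflow" \
  --input file://payload.json

# Check on a running execution (the execution ARN is printed by start-execution)
aws stepfunctions describe-execution \
  --execution-arn "arn:aws:states:us-east-2:123456789012:execution:my-migration-workflow:example-run"
```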
## Advanced Options

### CloudFormation Template Optimizations

The updated CloudFormation template includes these performance enhancements:

**Task Configuration Improvements:**

* `BatchApplyEnabled: true` - Improves target database write performance
* `ValidationMode: ROW_LEVEL` - Built-in data validation with a 10K failure tolerance
* Memory tuning: 2GB total memory limit with optimized batch sizing
* Enhanced CDC processing with a 5-second commit timeout
* Statement caching for improved query performance

**Monitoring Enhancements:**

* Comprehensive logging for all DMS components
* CloudWatch integration for real-time metrics
* Automated failure handling and notifications

**Schema-First Integration:**

* `TargetTablePrepMode: DO_NOTHING` preserves your pre-deployed schema
* `FullLoadIgnoreConflicts: true` handles edge cases gracefully
* Optimized for existing table structures and indexes

### Custom Migration Settings

Modify template parameters for:

* Different DMS instance sizes
* Custom migration types (full-load only, CDC only)
* Extended monitoring periods
* Custom notification settings

### Multiple Database Migration

Deploy multiple stacks with different names to migrate multiple databases in parallel.

## Migration Considerations

Before migrating, keep the following in mind:

**Important:** Allow additional time for post-migration schema object setup. Aurora databases with many indexes or complex constraints may require several hours for complete schema migration.

## Support and Resources

For simpler migrations, consider the [pg\_dump/restore](/docs/postgres/imports/postgres-migrate-dumprestore) or [logical replication](/docs/postgres/imports/postgres-migrate-walstream) methods.

**Post-Migration Success Checklist:**

* ✅ All schema objects migrated and validated
* ✅ Sequence values synchronized with Aurora
* ✅ Application performance matches pre-migration levels
* ✅ All critical application features tested
* ✅ Constraints and foreign keys working correctly
* ✅ No application errors in logs for 24+ hours
* ✅ Query performance baseline established

**Migration Timeline Expectations with the Optimized Template:**

* Schema setup: 30 minutes - 2 hours (depending on complexity)
* DMS full load: improved by \~25-40% due to batch processing optimizations
  * Small databases (under 10GB): 30 minutes - 2 hours
  * Medium databases (10-100GB): 2-6 hours
  * Large databases (100GB+): 4-12 hours
* Sequence synchronization: 5-15 minutes
* Validation and cutover: 30-60 minutes
* Total downtime for cutover: 5-30 minutes

**Performance Improvements:**

* Batch apply processing reduces target database load
* Enhanced memory management improves large table handling
* Row-level validation catches issues early without stopping the migration

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/postgres/imports/aurora.md

# Migrate from Aurora to PlanetScale

> Use this guide to migrate an existing Aurora (Postgres) database to PlanetScale Postgres.

This guide will cover a no-downtime approach to migrating using Postgres logical replication. If you are willing to tolerate downtime during a maintenance window, you may also use [`pg_dump` and restore](/docs/postgres/imports/postgres-migrate-dumprestore).
The `pg_dump`/restore approach is simpler, but is only suitable for applications where downtime is acceptable.

This guide assumes that public internet access is enabled on your Aurora database, as the new PlanetScale host will need it to connect and replicate from Aurora. If you cannot enable this due to security policies, consider using [AWS DMS](/docs/postgres/imports/aurora-dms) for your migration, or [contact support](https://planetscale.com/contact?initial=support) for more specific guidance.

These instructions work for all versions of Postgres that support logical replication (version 10+). If you have an older version you want to bring to PlanetScale, [contact us](https://planetscale.com/contact?initial=support) for guidance.

Before beginning a migration, you should check our [extensions documentation](/docs/postgres/extensions) to ensure that all of the extensions you rely on will work on PlanetScale.

As an alternative to this guide, you can also try our [Postgres migration scripts](https://github.com/planetscale/migration-scripts/tree/main/postgres-direct). These allow you to automate some of the manual steps that we describe in this guide.

## 1. Prepare your PlanetScale database

Go to `app.planetscale.com` and create a new database. A few things to check when configuring your database:

* Ensure you select the correct cloud region. You typically want to use the same region that you deploy your other application infrastructure to.
* This guide assumes you are migrating from a Postgres Aurora database, so also choose the Postgres option in PlanetScale.
* Choose the best storage option for your needs. For applications needing high-performance and low-latency I/O, use [PlanetScale Metal](/docs/metal). For applications that need more flexible storage options or smaller compute instances, choose "Elastic Block Storage" or "Persistent Disk."

Create a new PlanetScale Postgres database

Once the database is created and ready, navigate to your dashboard and click the "Connect" button.

Connect to a PlanetScale Postgres database

From here, follow the instructions to create a new default role. This role will act as your admin role, with the highest level of privileges. Though you may use this one for your migration, we recommend you use a separate role with lesser privileges for your migration and general database connections.

To create a new role, navigate to the [Role management page](/docs/postgres/connecting/roles) in your database settings. Click "New role" and give the role a memorable name. By default, `pg_read_all_data` and `pg_write_all_data` are enabled. In addition to these, enable `pg_create_subscription` and `postgres`, and then create the role.

New Postgres role privileges

Copy the password and all other connection credentials into environment variables for later use:

```bash theme={null}
PLANETSCALE_USERNAME=pscale_api_XXXXXXXXXX.XXXXXXXXXX
PLANETSCALE_PASSWORD=pscale_pw_XXXXXXXXXXXXXXXXXXXXXXX
PLANETSCALE_HOST=XXXX.pg.psdb.cloud
PLANETSCALE_DBNAME=postgres
```

We also recommend that you increase `max_worker_processes` for the duration of the migration, in order to speed up data copying. Go to the "Parameters" tab of the "Clusters" page:

Configure parameters

On this page, increase this value from the default of `4` to `10` or more:

Configure max worker processes

You can decrease this value after the migration is complete.
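Before moving on, it's worth confirming both that the new role connects and that the parameter change took effect. A quick sketch using the environment variables above:

```bash theme={null}
# Should print 10 (or whatever value you configured)
PGPASSWORD=$PLANETSCALE_PASSWORD psql \
  -h $PLANETSCALE_HOST \
  -p 5432 \
  -U $PLANETSCALE_USERNAME \
  -d $PLANETSCALE_DBNAME \
  -c "SHOW max_worker_processes;"
```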
## 2. Configure disk size on PlanetScale

If you are importing into a database backed by network-attached storage, you must configure your disk in advance to ensure your database will fit. Though we support disk autoscaling for these databases, AWS and GCP limit how frequently disks can be resized. If you don't ensure your disk is large enough for the import in advance, it will not be able to resize fast enough during a large data import.

To configure this, navigate to "Clusters" and then the "Storage" tab:

Storage configuration min size

On this page, adjust the "Minimum disk size." You should set this value to at least 150% of the size of the database you are migrating. For example, if the database you are importing is 330 GB, you should set your minimum disk size to at least 500 GB. The 50% overhead is to account for:

1. Data growth during the import process, and
2. Table and index bloat that can occur during the import process. This can be mitigated later with careful [VACUUMing](https://www.postgresql.org/docs/current/sql-vacuum.html) or an extension like [pg\_squeeze](https://planetscale.com/docs/postgres/extensions/pg_squeeze), but is difficult to avoid during the migration itself.

When ready, queue and apply the changes. You can check the "Changes" tab to see the status of the resize:

Confirm disk size change

Wait for it to indicate completion.

If you are importing to a Metal database, you must choose a disk size when first creating your database. You should launch your cluster with a disk size at least 50% larger than the storage used by your current source database (150% of the existing total). As an example, if you need to import a 330 GB database onto a PlanetScale `M-160`, there are three storage sizes available:

Metal disk size

You should use the largest, the 1.25 TB option, during the import. After importing and cleaning up table bloat, you may be able to downsize to the 468 GB option. Resizing is a no-downtime operation that can be performed on the [Clusters](https://planetscale.com/docs/postgres/cluster-configuration) page.

## 3. Prepare the Aurora database

For PlanetScale to import your database, it needs to be publicly accessible. You can check this in your AWS dashboard. In the writer instance of your database cluster, go to the "Connectivity & security" tab; under "Security" you will see whether your database is publicly accessible. If it says "No," you will need to change it to be publicly accessible through the "Modify" button. If this is an issue, if you cannot do this, or if you have questions, please [contact support](https://planetscale.com/contact?initial=support) to explore your migration options.

You will also need to change some parameters and ensure that logical replication is enabled. If you don't already have a parameter group for your Aurora cluster, create one from the "Parameter groups" page in the AWS console:

AWS parameter groups

From here, click the button to create a new group. Choose whichever name and description you want. Set the `Engine type` to `Aurora Postgres` and the `Parameter group family` to the version that matches your Aurora Postgres database. Set the `Type` to `DB Cluster Parameter Group`.

Create an AWS parameter group

If you already have a custom parameter group for your cluster, you can use the existing one instead.
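If you manage AWS from the command line, the same parameter group can be created with the AWS CLI. A sketch; the group name is a placeholder, and the family must match your Aurora Postgres version:

```bash theme={null}
aws rds create-db-cluster-parameter-group \
  --db-cluster-parameter-group-name planetscale-migration \
  --db-parameter-group-family aurora-postgresql15 \
  --description "Logical replication settings for PlanetScale import"
```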
The two key parameters you need to update are adding `pglogical` to `shared_preload_libraries` and setting `rds.logical_replication` to `1`:

Preload libraries parameter

Logical replication parameter

Once these are set, you need to make sure your Aurora database is configured to use them. Navigate to your Aurora database in the AWS console, click the "Modify" button, and then ensure your database is using the parameter group:

Set parameter group for Aurora

When you go to save the changes, select the option to either apply immediately or during your next maintenance window. The changes may take time to propagate. You can confirm that the `wal_level` is set to `logical` by running `SHOW wal_level;` on your Aurora database:

```sql theme={null}
postgres=> SHOW wal_level;
 wal_level
-----------
 logical
```

If you see a result other than `logical`, it is not configured correctly. If you are having trouble getting the settings to propagate, you can try restarting the Aurora instance, though that will cause a period of downtime.

## 4. Copy schema from Aurora to PlanetScale

Before we begin migrating data, we first must copy the schema from Aurora to PlanetScale. We do this as a distinct set of steps using `pg_dump`. You should not make any schema changes during the migration process. You may continue to select, insert, update, and delete data, keeping your application fully online during this process.

For these instructions, you'll need to connect to Aurora with a role that has permissions to create replication publications and read all data. The default role that Aurora generated when you first created your database should suffice here, but you may also use other roles. We will assume that the credentials for this user and other connection info are stored in the following environment variables:

```bash theme={null}
AURORA_USERNAME=XXXX
AURORA_PASSWORD=XXXX
AURORA_HOST=XXX
AURORA_DBNAME=XXX
```

Run the command below to take a snapshot of the full schema of the `$AURORA_DBNAME` database that you want to migrate:

```bash theme={null}
PGPASSWORD=$AURORA_PASSWORD \
pg_dump -h $AURORA_HOST \
  -p 5432 \
  -U $AURORA_USERNAME \
  -d $AURORA_DBNAME \
  --schema-only \
  --no-owner \
  --no-privileges \
  -f schema.sql
```

This saves the schema into a file named `schema.sql`. The above command will dump the tables for all schemas in the current database. If you want to migrate only one specific schema, you can add the `--schema=SCHEMA_NAME` option.

The schema then needs to be loaded into your new PlanetScale database:

```bash theme={null}
PGPASSWORD=$PLANETSCALE_PASSWORD \
psql -h $PLANETSCALE_HOST \
  -p 5432 \
  -U $PLANETSCALE_USERNAME \
  -d $PLANETSCALE_DBNAME \
  -f schema.sql
```

In the output of this command, you might see some error messages of the form:

```
psql:schema.sql:LINE: ERROR: DESCRIPTION
```

You should inspect these to see if they are of any concern. You can [reach out to our support](https://planetscale.com/contact) if you need assistance at this step.
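As an optional sanity check that the schema loaded, you can list the tables that now exist on PlanetScale:

```bash theme={null}
PGPASSWORD=$PLANETSCALE_PASSWORD \
psql -h $PLANETSCALE_HOST \
  -p 5432 \
  -U $PLANETSCALE_USERNAME \
  -d $PLANETSCALE_DBNAME \
  -c "\dt public.*"
```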
## 5. Set up logical replication

We now must create a `PUBLICATION` on Aurora that the PlanetScale database can subscribe to for data copying and replication. To create a publication for all tables in all schemas of the current database, run the following command on your Aurora database:

```sql theme={null}
CREATE PUBLICATION replicate_to_planetscale FOR ALL TABLES;
```

You should see this output if the publication was created correctly:

```sql theme={null}
CREATE PUBLICATION
```

To publish changes for only one specific schema, run the following query:

```sql theme={null}
SELECT 'CREATE PUBLICATION replicate_to_planetscale FOR TABLE ' ||
       string_agg(format('%I.%I', schemaname, tablename), ', ') || ';'
FROM pg_tables
WHERE schemaname = 'YOUR_SCHEMA_NAME';
```

This will generate a query that looks like this:

```sql theme={null}
CREATE PUBLICATION replicate_to_planetscale
FOR TABLE public.table_1, public.table_2, ... public.table_n;
```

You can then copy, paste, and execute it on Aurora. This will create a publication that only publishes changes for the tables in `YOUR_SCHEMA_NAME`.

After creating the publication on Aurora, we then need to tell PlanetScale to `SUBSCRIBE` to this publication:

```bash theme={null}
PGPASSWORD=$PLANETSCALE_PASSWORD psql \
  -h $PLANETSCALE_HOST \
  -U $PLANETSCALE_USERNAME \
  -p 5432 $PLANETSCALE_DBNAME \
  -c "
CREATE SUBSCRIPTION replicate_from_aurora
CONNECTION 'host=$AURORA_HOST dbname=$AURORA_DBNAME user=$AURORA_USERNAME password=$AURORA_PASSWORD'
PUBLICATION replicate_to_planetscale
WITH (copy_data = true);"
```

Data copying and replication will begin at this point. To check in on the row counts for the tables, you can run a query like this on your source and target databases:

```sql theme={null}
SELECT table_name, row_count
FROM (
  SELECT 'table_name_1' AS table_name, COUNT(*) AS row_count FROM table_name_1
  UNION ALL
  SELECT 'table_name_2', COUNT(*) FROM table_name_2
  UNION ALL
  ...
  SELECT 'table_name_N', COUNT(*) FROM table_name_N
) t
ORDER BY table_name;
```

When the row counts match (or nearly match), you can begin testing and prepare for your application to cut over to PlanetScale.

## 6. Handling sequences

Logical replication is great at migrating all of your data over to PlanetScale. However, logical replication does *not* synchronize the `nextval` values for [sequences](https://www.postgresql.org/docs/current/sql-createsequence.html) in your database. Sequences are often used for things like auto-incrementing IDs, so it's important to update these before you switch your traffic to PlanetScale.

You can see all of the sequences and their corresponding `nextval`s on your source Aurora database using this command:

```sql theme={null}
SELECT schemaname, sequencename, last_value + increment_by AS next_value
FROM pg_sequences;
```

An example output from this command:

```sql theme={null}
 schemaname |   sequencename   | next_value
------------+------------------+------------
 public     | users_id_seq     |        105
 public     | posts_id_seq     |       1417
 public     | followers_id_seq |       3014
```

This means we have three sequences in our database; in this case, they are all being used for auto-incrementing primary keys. The `nextval` for `users_id_seq` is 105, the `nextval` for `posts_id_seq` is 1417, and the `nextval` for `followers_id_seq` is 3014.
If you run the same query on your new PlanetScale database, you'll see something like:

```sql theme={null}
 schemaname |   sequencename   | next_value
------------+------------------+------------
 public     | users_id_seq     |          0
 public     | posts_id_seq     |          0
 public     | followers_id_seq |          0
```

If you switch traffic over to PlanetScale in this state, you'll likely encounter errors when inserting new rows:

```sql theme={null}
ERROR: duplicate key value violates unique constraint "XXXX"
DETAIL: Key (id)=(ZZZZ) already exists.
```

Before switching over, you need to advance all of these sequences forward so that the `nextval`s produced will be greater than any of the values previously produced on the source Aurora database, avoiding constraint violations.

There are several approaches you can take for this. A simple one is to first run this query on your source Aurora database:

```sql theme={null}
SELECT 'SELECT setval(''' || schemaname || '.' || sequencename || ''', ' || (last_value + 10000) || ');' AS query
FROM pg_sequences;
```

This will generate a set of queries that advance the `nextval` by 10,000 for each sequence:

```sql theme={null}
                      query
--------------------------------------------------
 SELECT setval('public.users_id_seq', 10104);
 SELECT setval('public.posts_id_seq', 11416);
 SELECT setval('public.followers_id_seq', 13013);
```

You would then execute these on your target PlanetScale database. You need to ensure you advance each sequence far enough forward that the sequences in the Aurora database will not reach these `nextval`s before you switch your primary to PlanetScale. For tables that have a high insertion rate, you might need to increase this by a larger value (say, 100,000 or 1,000,000).

## 7. Cutting over to PlanetScale

Before you cut over, it's good to have confidence that replication is fully caught up between Aurora and PlanetScale. You can do this using Log Sequence Numbers (LSNs). The goal is to see these match up exactly between the source Aurora database and the target PlanetScale database. If they don't, it indicates that the PlanetScale database is not fully caught up with the changes happening on Aurora.

You can run this on Aurora to see the current LSN:

```sql theme={null}
postgres=> SELECT pg_current_wal_lsn();
 pg_current_wal_lsn
--------------------
 0/703FE460
```

Then on PlanetScale, you would run the following query to check for a match:

```sql theme={null}
postgres=> SELECT received_lsn, latest_end_lsn
FROM pg_stat_subscription
WHERE subname = 'replicate_from_aurora';
 received_lsn | latest_end_lsn
--------------+----------------
 0/703FE460   | 0/703FE460
```

Once you are comfortable that all your data has successfully copied over and replication is sufficiently caught up, it's time to switch to PlanetScale. In your application code, prepare the cutover by changing the database connection credentials to point to PlanetScale rather than Aurora. Then, you can deploy this new version of your application, which will begin using PlanetScale as your primary database.

After doing this, new rows written to PlanetScale will not be reverse-replicated to Aurora. Thus, it's important to ensure you are fully ready for the cutover at this point.

Once this is complete, PlanetScale is now your primary database! We recommend you keep your old database around for at least a few days, just in case you discover any data or schemas you forgot to copy over to PlanetScale.
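One cleanup step worth considering (our suggestion, not a required part of this guide): once you are confident you won't roll back, drop the subscription on PlanetScale. This stops it from connecting to Aurora and removes the replication slot on Aurora, which would otherwise cause Aurora to retain WAL:

```sql theme={null}
-- Run on PlanetScale once the cutover is final
DROP SUBSCRIPTION replicate_from_aurora;
```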
## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/cli/auth.md

# PlanetScale CLI commands: auth

## Getting Started

Make sure to first [set up your PlanetScale developer environment](/docs/cli/planetscale-environment-setup). Once you've installed the `pscale` CLI, you can interact with PlanetScale and manage your databases straight from the command line.

## The `auth` command

This command allows you to log in, log out, and refresh your authentication. Auth tokens generated by `pscale` are valid for one month.

**Usage:**

```bash theme={null}
pscale auth <SUB-COMMAND> <FLAG>
```

### Available sub-commands

| **Sub-Command** | **Description**                       | **Product**      |
| :-------------- | :------------------------------------ | :--------------- |
| `login`         | Authenticate with the PlanetScale API | Postgres, Vitess |
| `logout`        | Log out of the PlanetScale API        | Postgres, Vitess |
| `check`         | Check if you are authenticated        | Postgres, Vitess |

### Available flags

| **Flag**       | **Description**                  |
| :------------- | :------------------------------- |
| `-h`, `--help` | View help for the `auth` command |

### Global flags

| **Flag**                   | **Description**                                                                       |
| :------------------------- | :------------------------------------------------------------------------------------ |
| `--api-token <token>`      | The API token to use for authenticating against the PlanetScale API.                  |
| `--api-url <url>`          | The base URL for the PlanetScale API. Default is `https://api.planetscale.com/`.      |
| `--config <file>`          | Config file. Default is `$HOME/.config/planetscale/pscale.yml`.                       |
| `--debug`                  | Enable debug mode.                                                                    |
| `-f`, `--format <format>`  | Show output in a specific format. Possible values: `human` (default), `json`, `csv`.  |
| `--no-color`               | Disable color output.                                                                 |
| `--service-token <token>`  | The service token for authenticating.                                                 |
| `--service-token-id <id>`  | The service token ID for authenticating.                                              |

## Examples

### The `login` sub-command

**Command:**

```bash theme={null}
pscale auth login
```

**Output:**

A new browser tab will open and ask you to sign in if you're not already signed in. Next, you'll be asked to confirm the device confirmation code displayed in your terminal:

```bash theme={null}
Confirmation Code: XXXXXXX
```

If the codes match, click "Confirm code", and you'll be signed in to the CLI.

### The `logout` sub-command

**Command:**

```bash theme={null}
pscale auth logout
```

**Output:**

```bash theme={null}
Press Enter to log out of the PlanetScale API.
```

### The `check` sub-command

**Command:**

```bash theme={null}
pscale auth check
```

**Output:**

```bash theme={null}
You are authenticated.
```

If you are not authenticated, exit code 1 will be returned.

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/security/authentication-methods.md

# Authentication methods

> There are three ways to authenticate with PlanetScale: *email address and password*, *single sign-on*, and *OAuth via GitHub*.
## Overview

Let's break down how each of these works.

## Email address and password

This is the only authentication mechanism where PlanetScale maintains user credentials. Additionally, users can opt to configure [two-factor authentication (2FA)](/docs/security/multi-factor-authentication). This option requires **something you know** *(i.e. your password)* and **something you have** *(i.e. recovery codes)*.

## Single sign-on

Users can authenticate with their chosen corporate identity provider *(e.g. Okta)* instead of maintaining passwords with PlanetScale. Once [SSO](/docs/security/sso) is enabled for an `organization`, all members are redirected through that identity provider's authentication flow. Moving forward, they must pass through SSO to access their PlanetScale account.

## OAuth via GitHub

Users can authenticate with PlanetScale using their GitHub account. PlanetScale doesn't maintain the passwords for these accounts. Losing access to your GitHub account will prevent you from accessing your PlanetScale account.

Enabling SSO removes OAuth access for all members of your *organization*, meaning they will no longer be able to sign in with their GitHub credentials.

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/tutorials/automatic-prisma-migrations.md

# Automatic Prisma migrations

This document has been updated to include the recommended Prisma and PlanetScale workflow, specifically the recommendation to use `prisma db push` instead of `prisma migrate dev` with shadow branches. Previously, you also needed to turn on the ability to automatically copy the Prisma migration metadata; you no longer need to do this. Read more below.

## Introduction

In this tutorial, we're going to learn how to do Prisma migrations in PlanetScale as part of your deployment process using `prisma db push`.

### Quick introduction to Prisma's db push

From a high level, [Prisma's `db push`](https://www.prisma.io/docs/orm/reference/prisma-cli-reference#db-push) introspects your PlanetScale database to infer and execute the changes required to make your database schema reflect the state of your Prisma schema. When `prisma db push` is run, it will ensure the schema in the PlanetScale branch you are currently connected to matches your current Prisma schema.

We recommend `prisma db push` over `prisma migrate dev` for the following reasons: PlanetScale provides [Online Schema Changes](/docs/vitess/schema-changes/how-online-schema-change-tools-work) that are deployed automatically when you merge a deploy request, and it prevents [blocking schema changes](/docs/vitess/schema-changes) that can lead to downtime. This is different from the typical Prisma workflow, which uses `prisma migrate` to generate SQL migrations for you based on changes in your Prisma schema. When using PlanetScale with Prisma, the responsibility of applying the changes is on the PlanetScale side. Therefore, there is little value in using `prisma migrate` with PlanetScale.
Also, the migrations table created when `prisma migrate` runs can be misleading, since PlanetScale performs the actual migration when the deploy request is merged, not when `prisma migrate` is run, which only updates the schema in the development database branch. You can still see the history of your schema changes in PlanetScale.

## Prerequisites

* Add Prisma to your project using `npm install prisma --save-dev` or `yarn add prisma --dev` (depending on which package manager you prefer).
* Run `npx prisma init` inside of your project to create the initial files needed for Prisma.
* Install the [PlanetScale CLI](https://github.com/planetscale/cli).
* Authenticate the CLI with the following command:

```bash theme={null}
pscale auth login
```

## Execute your first Prisma db push

Prisma migrations follow the PlanetScale [non-blocking schema change](/docs/vitess/schema-changes) workflow. First, the schema is applied to a *development* branch, and then the development branch is merged into the `main` production database.

Let's begin with an example flow for running Prisma migrations in PlanetScale.

Create a new *prisma-playground* database:

```bash theme={null}
pscale db create prisma-playground
```

Connect to the database branch:

```bash theme={null}
pscale connect prisma-playground main --port 3309
```

This step assumes you created a new PlanetScale database and have not yet enabled [Safe Migrations](/docs/vitess/schema-changes/safe-migrations) on the `main` branch. Otherwise, you will need to create a new development branch.

Update your `prisma/schema.prisma` file with the following schema:

In Prisma `4.5.0`, `referentialIntegrity` changed to `relationMode`, and it became generally available in `4.7.0`. The following schema reflects this change. You can learn more about Prisma's relation mode in the [Prisma docs](https://www.prisma.io/docs/orm/prisma-schema/data-model/relations/relation-mode).

```js expandable theme={null}
datasource db {
  provider     = "mysql"
  url          = env("DATABASE_URL")
  relationMode = "prisma"
}

generator client {
  provider = "prisma-client-js"
}

model Post {
  id        Int      @default(autoincrement()) @id
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  title     String   @db.VarChar(255)
  content   String?
  published Boolean  @default(false)
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int
}

model Profile {
  id     Int     @default(autoincrement()) @id
  bio    String?
  user   User    @relation(fields: [userId], references: [id])
  userId Int     @unique
}

model User {
  id      Int      @default(autoincrement()) @id
  email   String   @unique
  name    String?
  posts   Post[]
  profile Profile?
}
```

Update your `.env` file:

```shell theme={null}
DATABASE_URL="mysql://root@127.0.0.1:3309/prisma-playground"
```

In another terminal, use the `db push` command to push the schema defined in `prisma/schema.prisma`:

```bash theme={null}
npx prisma db push
```

Unlike the `prisma migrate dev` command, this will not create a migrations folder containing SQL files with the statements used to update the schema in your PlanetScale database. PlanetScale will be tracking your migrations in this workflow. You can learn more about the `prisma db push` command in the [Prisma docs](https://www.prisma.io/docs/orm/reference/prisma-cli-reference#db-push).

After `db push` succeeds, you can inspect the created tables from your terminal. For example, to see the `Post` table:

```bash theme={null}
pscale shell prisma-playground main
```

```sql theme={null}
describe Post;
```

Use the `exit` command to exit the MySQL shell. Alternatively, you can see the table in the PlanetScale UI under the Schema tab in your `main` branch.
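If you prefer a query to the `describe` output, the same check works against `information_schema` from the shell above (a sketch):

```sql theme={null}
SELECT column_name, column_type, is_nullable
FROM information_schema.columns
WHERE table_schema = DATABASE() AND table_name = 'Post'
ORDER BY ordinal_position;
```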
Finally, turn on safe migrations on the `main` branch to enable non-blocking schema changes:

```bash theme={null}
pscale branch safe-migrations enable prisma-playground main
```

## Execute succeeding Prisma migrations in PlanetScale

Our first example migration flow went well, but what happens when you need to make further changes to your schema? Let's take a look.

Create a new *development* branch from `main` called `add-subtitle-to-posts`:

```bash theme={null}
pscale branch create prisma-playground add-subtitle-to-posts
```

Close the proxy connection to your `main` branch (if still open) and connect to the new `add-subtitle-to-posts` development branch:

```bash theme={null}
pscale connect prisma-playground add-subtitle-to-posts --port 3309
```

In the `prisma/schema.prisma` file, update the `Post` model by adding a new `subtitle` field:

```
subtitle  String  @db.VarChar(255)
```

Run `db push` again to update the schema in PlanetScale:

```bash theme={null}
npx prisma db push
```

Open a deploy request for your `add-subtitle-to-posts` branch so that you can deploy these changes to `main`. You can complete the deploy request either in the web app or with the `pscale deploy-request` command:

```bash theme={null}
pscale deploy-request create prisma-playground add-subtitle-to-posts
```

```bash theme={null}
pscale deploy-request deploy prisma-playground 1
```

Once the deploy request is merged, you can see the results in your `main` branch's `Post` table:

```bash theme={null}
pscale shell prisma-playground main
```

```sql theme={null}
describe Post;
```

## What's next?

Now that you've successfully conducted your first automatic Prisma migration in PlanetScale and know how to handle future migrations, it's time to deploy your application with a PlanetScale database! Let's learn how to [deploy an application with a PlanetScale database to Vercel](/docs/vitess/tutorials/deploy-to-vercel).

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/tutorials/automatic-rails-migrations.md

# Automatic Rails migrations

If you are using PlanetScale with a Rails application, go to your database's Settings page in the web app and enable "Automatically copy migration data." Select "Rails/Phoenix" as the migration framework. When enabled, this setting updates the *schema\_migrations* table with the latest migration data each time you branch. If disabled, running *rake db:migrate* will try to run all migrations every time, instead of only the latest one.

## Introduction

In this tutorial, you're going to learn how Rails migrations work with the PlanetScale branching and deployment workflows.

Migration tracking works with any migration tool, not just Rails. For other frameworks, specify the migration table name on your database's Settings page.

## Prerequisites

Follow the [Connect a Rails app](/docs/vitess/tutorials/connect-rails-app) tutorial first.
By the end, you will have:

* Installed the [PlanetScale CLI](https://github.com/planetscale/cli), Ruby, and the Rails gem
* Created a PlanetScale database named `blog`
* Started a new Rails app named `blog` with a migration creating a `Users` table
* Run the first Rails migration

### A quick introduction to Rails migrations

Rails tracks an application's migrations in an internal table called `schema_migrations`. At a high level, running `rake db:migrate` does the following:

* Rails looks at all of the migration files in your `db/migrate` directory.
* Rails queries the `schema_migrations` table to see which migrations have and haven't been run.
* Any migration that doesn't appear in the `schema_migrations` table is considered pending and is executed by this task.

When you merge a deploy request in PlanetScale, the *schema\_migrations* table in *main* is automatically updated with the migration data from your branch.

## Execute a Rails migration on PlanetScale

Rails migrations follow the PlanetScale [non-blocking schema change](/docs/vitess/schema-changes) workflow. First, the migration is applied to a *development* branch, and then the development branch is merged into the `main` production branch with [safe migrations](/docs/vitess/schema-changes/safe-migrations) enabled.

Let's add another table to your existing `blog` schema.

Create an `add-posts-table` development branch from `main` in your database *blog*:

```bash theme={null}
pscale branch create blog add-posts-table
```

When the branch is ready, you can verify that the `schema_migrations` table is up-to-date with `main` by checking for the timestamp of your `Create Users` migration file. Your migration will have a different timestamp than the one shown here.

Check the timestamp in your codebase:

```bash theme={null}
ls db/migrate
20211014210422_create_users.rb
```

Connect to the new branch:

```bash theme={null}
pscale shell blog add-posts-table
```

Query the migration table:

```sql theme={null}
blog/add-posts-table> select * from schema_migrations;
+----------------+
| version        |
+----------------+
| 20211014210422 |
+----------------+
```

Connect your development environment to the new branch. One way to do this is to create a new password for the `add-posts-table` branch and update `config/database.yml` with the new username, password, and host. Another is to use `pscale connect` to establish a secure connection on a local port. Since the `add-posts-table` branch won't be needed after the migration, let's use the `pscale connect` proxy.

In a separate terminal, establish the connection:

```bash theme={null}
pscale connect blog add-posts-table --port 3309
```

Then, update `config/database.yml` to connect through the proxy:

```yaml theme={null}
development:
  <<: *default
  adapter: trilogy
  database: blog
  host: 127.0.0.1
  port: 3309
```

Create the second Rails migration and call it `CreatePosts`:

```bash theme={null}
rails generate migration CreatePosts
```

Find the new migration file in `db/migrate` and add a few details for the new `posts` table:

```ruby theme={null}
class CreatePosts < ActiveRecord::Migration[7.0]
  def change
    create_table :posts do |t|
      t.string :title
      t.text :content
      t.boolean :published
      t.references :user
      t.timestamps
    end
  end
end
```

Run the `CreatePosts` migration:

```bash theme={null}
rake db:migrate
```

This command runs the new migration against your `add-posts-table` *development* branch.
At this point, Rails creates the `posts` table and inserts another `timestamp` into the `schema_migrations` table on your development branch. You can verify the change in `schema_migrations` yourself:

```sql theme={null}
blog/add-posts-table> select * from schema_migrations;
+----------------+
| version        |
+----------------+
| 20211014210422 |
| 20220224221753 |
+----------------+
```

Open a deploy request for your `add-posts-table` branch, and deploy your changes to `main`. You can complete the deploy request either in the web app or with the `pscale deploy-request` command:

```bash theme={null}
pscale deploy-request create blog add-posts-table
```

```bash theme={null}
pscale deploy-request deploy blog 1
```

To create the deploy request, PlanetScale looks at the differences between the schemas of `main` and `add-posts-table` and plans a `create table` statement to add the new table to `main`. When you deploy, PlanetScale runs that `create table` statement and copies the second row from `schema_migrations` in `add-posts-table` to the `schema_migrations` table in `main`.

Verify the changes in your `main` branch. In a `pscale` shell for `main`, you can verify that the changes from `add-posts-table` were deployed successfully:

```bash theme={null}
pscale shell blog main
```

```sql theme={null}
blog/|⚠ main ⚠|> show tables;
+----------------------+
| Tables_in_blog       |
+----------------------+
| posts                |
| schema_migrations    |
| users                |
+----------------------+

blog/|⚠ main ⚠|> select * from schema_migrations;
+----------------+
| version        |
+----------------+
| 20211014210422 |
| 20220224221753 |
+----------------+
```

## Summary

In this tutorial, we learned how to use the PlanetScale deployment process with the Rails migration workflow.

## What's next?

Learn more about how PlanetScale allows you to make [schema changes](/docs/vitess/schema-changes) to your production databases without downtime or locking tables.

## Need help?

Get help from [the PlanetScale Support team](https://support.planetscale.com/), or join our [GitHub discussion board](https://github.com/planetscale/discussion/discussions) to see how others are using PlanetScale.

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://planetscale.com/llms.txt

---

# Source: https://planetscale.com/docs/vitess/sharding/avoiding-cross-shard-queries.md

# Avoiding cross-shard queries

export const YouTubeEmbed = ({id, title}) => { return